WorldWideScience

Sample records for benchmark dose method

  1. Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1

    International Nuclear Information System (INIS)

    Davis, J. Allen; Gift, Jeffrey S.; Zhao, Q. Jay

    2011-01-01

    Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial, well-documented limitations, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. The NOAEL approach also fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of these limitations. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, using the 95% lower confidence limit of the BMD (the BMDL) as the POD appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations worldwide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.
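
    A minimal numerical sketch may help make the BMD/BMDL distinction concrete. The Python fragment below fits a log-logistic model to hypothetical quantal data by maximum likelihood, solves for the dose giving 10% extra risk (the BMD), and approximates the BMDL by a parametric bootstrap. The data, the model choice, and the bootstrap lower bound are all illustrative assumptions; this is not the BMDS implementation (BMDS uses profile-likelihood limits, among other differences).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    doses  = np.array([0.0, 10.0, 50.0, 150.0])   # hypothetical dose groups
    n      = np.array([50, 50, 50, 50])           # animals per group
    events = np.array([2, 5, 14, 30])             # responders per group

    def prob(params, d):
        # log-logistic: P(d) = g + (1-g)/(1 + exp(-(a + b*log d))), P(0) = g
        g, a, b = params
        logd = np.log(np.maximum(d, 1e-12))
        p = np.where(d > 0, g + (1 - g) / (1 + np.exp(-(a + b * logd))), g)
        return np.clip(p, 1e-9, 1 - 1e-9)

    def negloglik(params, d, n, k):
        g, _, b = params
        if not (0.0 <= g < 1.0 and b > 0.0):
            return np.inf                         # crude parameter constraints
        p = prob(params, d)
        return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

    def fit(d, n, k):
        return minimize(negloglik, x0=[0.05, -5.0, 1.0], args=(d, n, k),
                        method="Nelder-Mead").x

    def bmd(params, bmr=0.10):
        # extra risk [P(d)-P(0)]/[1-P(0)] = BMR has a closed form here
        _, a, b = params
        return np.exp((np.log(bmr / (1 - bmr)) - a) / b)

    mle = fit(doses, n, events)
    rng = np.random.default_rng(1)
    boot = [bmd(fit(doses, n, rng.binomial(n, prob(mle, doses))))
            for _ in range(500)]                  # parametric bootstrap
    print(f"BMD10 = {bmd(mle):.1f}, BMDL10 = {np.percentile(boot, 5):.1f}")
    ```

    Note how the BMDL falls below the BMD point estimate by an amount that shrinks as group sizes grow, which is exactly the sense in which the BMDL "accounts for study quality."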

  2. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  3. Dose Rate Experiment at JET for Benchmarking the Calculation Direct One Step Method

    International Nuclear Information System (INIS)

    Angelone, M.; Petrizzi, L.; Pillon, M.; Villari, R.; Popovichev, S.

    2006-01-01

    Neutrons produced by D-D and D-T plasmas induce activation of tokamak materials and components. The development of reliable methods to assess dose rates is a key issue for maintaining and operating nuclear machines, in normal and off-normal conditions. In the frame of the EFDA Fusion Technology work programme, a computational tool based upon the MCNP Monte Carlo code has been developed to predict the dose rate after shutdown: it is called the Direct One Step Method (D1S). The D1S is an innovative approach in which the decay gammas are coupled to the neutrons as in the prompt case and are transported in a single step in the same run. Benchmarking this new tool against experimental data taken in a complex geometry like that of a tokamak is a fundamental step in testing the reliability of the D1S method. A dedicated benchmark experiment was proposed for the 2005-2006 experimental campaign of JET. Two irradiation positions were selected for the benchmark: one inner position inside the vessel, not far from the plasma, called the 2 Upper irradiation end (IE2), where the neutron fluence is relatively high; and a second position just outside a vertical port, in an external position (EX), where the neutron flux is lower and the dose rate to be measured is not very far from the residual background. Passive detectors are used for in-vessel measurements: high-sensitivity Thermo Luminescent Dosimeters (TLDs) of type GR-200A (natural LiF), which ensure measurements down to environmental dose levels. An active detector of Geiger-Muller (GM) type is used for the out-of-vessel dose rate measurement. Before use, the detectors were calibrated in a secondary gamma-ray standard (Cs-137 and Co-60) facility in terms of air kerma. The background measurement was carried out in the period July-September 2005 in the outside position EX using the GM tube, and in September 2005 inside the vacuum vessel using TLD detectors located in the 2 Upper irradiation end IE2. In the present work

  4. Nonparametric estimation of benchmark doses in environmental risk assessment

    Science.gov (United States)

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
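
    As a rough illustration of the nonparametric idea (not the Bhattacharya and Kong estimator used in the paper), one can fit a weighted isotonic (monotone) regression to observed response proportions and invert it at the benchmark response. The data below are hypothetical, and the linear interpolation used to invert the step function is a simplifying assumption.

    ```python
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    doses = np.array([0.0, 5.0, 25.0, 100.0])   # hypothetical design
    n     = np.array([40, 40, 40, 40])
    k     = np.array([1, 4, 10, 22])

    def isotonic_bmd(doses, n, k, bmr=0.10):
        # weighted isotonic regression of observed proportions on dose
        iso = IsotonicRegression(increasing=True)
        p_hat = iso.fit_transform(doses, k / n, sample_weight=n)
        p0 = p_hat[0]
        target = p0 + bmr * (1 - p0)            # extra-risk definition of the BMR
        # invert the monotone fit back to the dose scale by interpolation
        return np.interp(target, p_hat, doses)

    print(f"nonparametric BMD10 = {isotonic_bmd(doses, n, k):.1f}")
    ```

    A bootstrap over the group-level binomial counts, refitting and re-inverting each time, would give the BMDL analogue described in the abstract.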

  5. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth

  6. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    In regression analysis failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for determination of safe doses of toxic substances have so far not received much attention. The benchmark...... approach is one of the most widely used methods for development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored......, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study...
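
    The direction of the bias can be seen from a one-line argument for classical, nondifferential measurement error in a simple linear dose-response model; this is a standard textbook result and only a simplification of the more general models the paper treats. With a benchmark response change δ, the BMD is δ/β, so an attenuated slope inflates the estimated BMD:

    ```latex
    \hat\beta \approx \lambda\beta, \qquad
    \lambda=\frac{\sigma_X^{2}}{\sigma_X^{2}+\sigma_U^{2}}<1
    \quad\Longrightarrow\quad
    \widehat{\mathrm{BMD}}=\frac{\delta}{\hat\beta}\approx\frac{\mathrm{BMD}}{\lambda}>\mathrm{BMD},
    ```

    where σ²_X is the variance of true exposure and σ²_U the error variance: the noisier the exposure marker, the higher (and less protective) the naive benchmark dose.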

  7. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  8. Categorical Regression and Benchmark Dose Software 3.0

    Science.gov (United States)

    The objective of this full-day course is to provide participants with interactive training on the use of the U.S. Environmental Protection Agency’s (EPA) Benchmark Dose software (BMDS, version 3.0, released fall 2018) and Categorical Regression software (CatReg, version 3.1...

  9. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana

    2014-01-01

    . Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene...

  10. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  11. A Web-Based System for Bayesian Benchmark Dose Estimation.

    Science.gov (United States)

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
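
    To illustrate what "distributional estimates for the BMD" means in practice, here is a self-contained random-walk Metropolis sketch for a quantal log-logistic model: each posterior draw of the parameters is converted into a BMD, so the BMD itself acquires a full posterior distribution. Everything here (data, priors, tuning constants) is an illustrative assumption; it mirrors the MCMC idea only, not the BBMD system's actual models or priors.

    ```python
    import numpy as np

    doses = np.array([0.0, 10.0, 50.0, 150.0])   # hypothetical quantal data
    n     = np.array([50, 50, 50, 50])
    k     = np.array([2, 5, 14, 30])

    def log_post(theta):
        g, a, b = theta
        if not (0.0 < g < 1.0 and b > 0.0):
            return -np.inf
        logd = np.log(np.maximum(doses, 1e-12))
        p = np.where(doses > 0, g + (1 - g) / (1 + np.exp(-(a + b * logd))), g)
        p = np.clip(p, 1e-9, 1 - 1e-9)
        loglik = np.sum(k * np.log(p) + (n - k) * np.log(1 - p))
        logprior = -0.5 * (a / 10) ** 2 - 0.5 * ((b - 1) / 5) ** 2  # weak normals
        return loglik + logprior

    rng, theta = np.random.default_rng(0), np.array([0.05, -5.0, 1.0])
    lp, draws = log_post(theta), []
    for i in range(20000):
        prop = theta + rng.normal(0.0, [0.02, 0.3, 0.1])   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis accept step
            theta, lp = prop, lp_prop
        if i >= 5000 and i % 10 == 0:                      # burn-in and thinning
            draws.append(theta.copy())

    draws = np.array(draws)
    bmr = 0.10
    bmd_post = np.exp((np.log(bmr / (1 - bmr)) - draws[:, 1]) / draws[:, 2])
    print("posterior median BMD10:", float(np.median(bmd_post)))
    print("BMDL10 (5th posterior percentile):", float(np.percentile(bmd_post, 5)))
    ```

    The 5th posterior percentile plays the role of the BMDL; unlike a frequentist confidence limit, it comes directly from the sampled distribution, which is the feature the abstract highlights.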

  12. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  13. Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)

    Science.gov (United States)

    EPA benchmark dose software (BMDS) is used to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...

  14. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    Science.gov (United States)

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...

  15. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning about certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware

  16. Benchmarking HRA methods against different NPP simulator data

    International Nuclear Information System (INIS)

    Petkov, Gueorgui; Filipov, Kalin; Velev, Vladimir; Grigorov, Alexander; Popov, Dimiter; Lazarov, Lazar; Stoichev, Kosta

    2008-01-01

    The paper presents both international and Bulgarian experience in assessing HRA methods, their underlying models, and approaches for their validation and verification by benchmarking HRA methods against different NPP simulator data. The organization, status, methodology and outlooks of the studies are described.

  17. Benchmark studies of induced radioactivity produced in LHC materials, Part II: Remanent dose rates.

    Science.gov (United States)

    Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H

    2005-01-01

    A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment that was performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials was placed downstream of, and laterally to, a copper target, intercepting a positively charged mixed hadron beam with a momentum of 120 GeV c⁻¹. Emphasis was put on the reduction of uncertainties by taking measures such as careful monitoring of the irradiation parameters, using different instruments to measure dose rates, adopting detailed elemental analyses of the irradiated materials and making detailed simulations of the irradiation experiment. The measured and calculated dose rates are in good agreement.

  18. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Roč. 8, č. 5 (2015), s. 2240-2248 ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 2.145, year: 2015 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  19. Immunotoxicity of perfluorinated alkylates: calculation of benchmark doses based on serum concentrations in children

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Joergensen, Esben

    2013-01-01

    BACKGROUND: Immune suppression may be a critical effect associated with exposure to perfluorinated compounds (PFCs), as indicated by recent data on vaccine antibody responses in children. Therefore, this information may be crucial when deciding on exposure limits. METHODS: Results obtained from...... follow-up of a Faroese birth cohort were used. Serum-PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children...

  20. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia

    2010-01-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  1. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  2. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, the benchmarking methodologies tend to exceed the boundaries of management techniques, and to enter the territories of managerial culture. A culture that is also destined to break into the accounting area not only strongly supporting the possibility of fixing targets, and measuring and comparing the performance (an aspect that is already innovative and that is worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...

  3. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    Science.gov (United States)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  4. Benchmarking: a method for continuous quality improvement in health.

    Science.gov (United States)

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  5. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Full Text Available Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from a wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion: This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.

  6. Method of characteristics - Based sensitivity calculations for international PWR benchmark

    International Nuclear Information System (INIS)

    Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.

    2013-01-01

    A method to calculate the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method on the basis of the MOC code MCCG3D is developed. Sensitivity calculations of fission intensity for the international PWR benchmark are performed. (authors)

  7. Benchmarking routine psychological services: a discussion of challenges and methods.

    Science.gov (United States)

    Delgadillo, Jaime; McMillan, Dean; Leach, Chris; Lucock, Mike; Gilbody, Simon; Wood, Nick

    2014-01-01

    Policy developments in recent years have led to important changes in the level of access to evidence-based psychological treatments. Several methods have been used to investigate the effectiveness of these treatments in routine care, with different approaches to outcome definition and data analysis. To present a review of challenges and methods for the evaluation of evidence-based treatments delivered in routine mental healthcare. This is followed by a case example of a benchmarking method applied in primary care. High, average and poor performance benchmarks were calculated through a meta-analysis of published data from services working under the Improving Access to Psychological Therapies (IAPT) Programme in England. Pre-post treatment effect sizes (ES) and confidence intervals were estimated to illustrate a benchmarking method enabling services to evaluate routine clinical outcomes. High, average and poor performance ES for routine IAPT services were estimated to be 0.91, 0.73 and 0.46 for depression (using PHQ-9) and 1.02, 0.78 and 0.52 for anxiety (using GAD-7). Data from one specific IAPT service exemplify how to evaluate and contextualize routine clinical performance against these benchmarks. The main contribution of this report is to summarize key recommendations for the selection of an adequate set of psychometric measures, the operational definition of outcomes, and the statistical evaluation of clinical performance. A benchmarking method is also presented, which may enable a robust evaluation of clinical performance against national benchmarks. Some limitations concerned significant heterogeneity among data sources, and wide variations in ES and data completeness.
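
    A small sketch of the benchmarking arithmetic described above: a pre-post effect size standardized by the baseline SD, with a normal-approximation confidence interval for repeated measures. The baseline-SD standardizer and the SE approximation are common choices assumed here rather than taken from the report, and the scores are hypothetical PHQ-9 values.

    ```python
    import numpy as np

    pre  = np.array([18, 15, 20, 12, 17, 19, 14, 16])   # hypothetical baseline PHQ-9
    post = np.array([ 9,  8, 15,  6, 10, 12,  7,  9])   # hypothetical post-treatment

    n  = len(pre)
    es = (pre.mean() - post.mean()) / pre.std(ddof=1)   # pre-post effect size
    r  = np.corrcoef(pre, post)[0, 1]                   # pre-post correlation
    se = np.sqrt(2 * (1 - r) / n + es**2 / (2 * n))     # repeated-measures approx.
    print(f"ES = {es:.2f}, 95% CI = ({es - 1.96*se:.2f}, {es + 1.96*se:.2f})")
    ```

    A service would then locate its ES (with its interval) against the published high/average/poor benchmarks, e.g. 0.91/0.73/0.46 for depression.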

  8. The current state of knowledge on the use of the benchmark dose concept in risk assessment.

    Science.gov (United States)

    Sand, Salomon; Victorin, Katarina; Filipsson, Agneta Falk

    2008-05-01

    This review deals with the current state of knowledge on the use of the benchmark dose (BMD) concept in health risk assessment of chemicals. The BMD method is an alternative to the traditional no-observed-adverse-effect level (NOAEL) and has been presented as a methodological improvement in the field of risk assessment. The BMD method has mostly been employed in the USA but is presently given higher attention also in Europe. The review presents a number of arguments in favor of the BMD, relative to the NOAEL. In addition, it gives a detailed overview of the several procedures that have been suggested and applied for BMD analysis, for quantal as well as continuous data. For quantal data the BMD is generally defined as corresponding to an additional or extra risk of 5% or 10%. For continuous endpoints it is suggested that the BMD is defined as corresponding to a percentage change in response relative to background or relative to the dynamic range of response. Under such definitions, a 5% or 10% change can be considered as default. Besides how to define the BMD and its lower bound, the BMDL, the question of how to select the dose-response model to be used in the BMD and BMDL determination is highlighted. Issues of study design and comparison of dose-response curves and BMDs are also covered. Copyright (c) 2007 John Wiley & Sons, Ltd.
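
    The quantal and continuous BMD definitions surveyed in the review can be written compactly as follows, with the BMD being the dose at which the chosen expression first reaches the BMR (e.g., 0.05 or 0.10):

    ```latex
    \underbrace{P(d)-P(0)}_{\text{additional risk}}=\mathrm{BMR},\qquad
    \underbrace{\frac{P(d)-P(0)}{1-P(0)}}_{\text{extra risk}}=\mathrm{BMR},\qquad
    \underbrace{\frac{\lvert \mu(d)-\mu(0)\rvert}{\mu(0)}}_{\text{continuous, relative change}}=\mathrm{BMR}.
    ```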

  9. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  10. A large-scale benchmark of gene prioritization methods.

    Science.gov (United States)

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks, objective to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.

  11. Benchmarking of MCNP for calculating dose rates at an interim storage facility for nuclear waste.

    Science.gov (United States)

    Heuel-Fabianek, Burkhard; Hille, Ralf

    2005-01-01

    During the operation of research facilities at Research Centre Jülich, Germany, nuclear waste is stored in drums and other vessels in an interim storage building on-site, which has a concrete shielding at the side walls. Owing to the lack of a well-defined source, measured gamma spectra were unfolded to determine the photon flux on the surface of the containers. The dose rate simulation, including the effects of skyshine, using the Monte Carlo transport code MCNP is compared with the measured dosimetric data at some locations in the vicinity of the interim storage building. The MCNP data for direct radiation confirm the data calculated using a point-kernel method. However, a comparison of the modelled dose rates for direct radiation and skyshine with the measured data demonstrate the need for a more precise definition of the source. Both the measured and the modelled dose rates verified the fact that the legal limits (<1 mSv a⁻¹) are met in the area outside the perimeter fence of the storage building to which members of the public have access. Using container surface data (gamma spectra) to define the source may be a useful tool for practical calculations and additionally for benchmarking of computer codes if the discussed critical aspects with respect to the source can be addressed adequately.

  12. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study.

    Science.gov (United States)

    De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M

    2017-02-01

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, amount of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with lowest dose levels showed smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age-group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age-groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase. • Sharing dose-data can be a trigger for hospitals to reduce dose levels.

  13. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study

    Energy Technology Data Exchange (ETDEWEB)

    Bondt, Timo de; Parizel, Paul M. [Antwerp University Hospital and University of Antwerp, Department of Radiology, Antwerp (Belgium); Mulkens, Tom [H. Hart Hospital, Department of Radiology, Lier (Belgium); Zanca, Federica [GE Healthcare, DoseWatch, Buc (France); KU Leuven, Imaging and Pathology Department, Leuven (Belgium); Pyfferoen, Lotte; Casselman, Jan W. [AZ St. Jan Brugge-Oostende AV Hospital, Department of Radiology, Brugge (Belgium)

    2017-02-15

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, amount of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with lowest dose levels showed smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age-group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. (orig.)

  14. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    for hierarchical data structures, reflecting increasingly common types of assay data. We illustrate the usefulness of the methodology by means of a cytotoxicology example where the sensitivity of two types of assays are evaluated and compared. By means of a simulation study, we show that the proposed framework......This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...

  15. Benchmarking with high-order nodal diffusion methods

    International Nuclear Information System (INIS)

    Tomasevic, D.; Larsen, E.W.

    1993-01-01

    Significant progress in the solution of multidimensional neutron diffusion problems was made in the late 1970s with the introduction of nodal methods. Modern nodal reactor analysis codes provide significant improvements in both accuracy and computing speed over earlier codes based on fine-mesh finite difference methods. In the past, the performance of advanced nodal methods was determined by comparisons with fine-mesh finite difference codes. More recently, the excellent spatial convergence of nodal methods has permitted their use in establishing reference solutions for some important benchmark problems. The recent development of the self-consistent high-order nodal diffusion method and its subsequent variational formulation has permitted the calculation of reference solutions with one node per assembly mesh size. In this paper, we compare results for four selected benchmark problems to those obtained by high-order response matrix methods and by two well-known state-of-the-art nodal methods (the “analytical” and “nodal expansion” methods)

  16. svclassify: a method to establish benchmark structural variant calls.

    Science.gov (United States)

    Parikh, Hemang; Mohiyuddin, Marghoob; Lam, Hugo Y K; Iyer, Hariharan; Chen, Desu; Pratt, Mark; Bartha, Gabor; Spies, Noah; Losert, Wolfgang; Zook, Justin M; Salit, Marc

    2016-01-16

    The human genome contains variants ranging in size from small single nucleotide polymorphisms (SNPs) to large structural variants (SVs). High-quality benchmark small variant calls for the pilot National Institute of Standards and Technology (NIST) Reference Material (NA12878) have been developed by the Genome in a Bottle Consortium, but no similar high-quality benchmark SV calls exist for this genome. Since SV callers output highly discordant results, we developed methods to combine multiple forms of evidence from multiple sequencing technologies to classify candidate SVs into likely true or false positives. Our method (svclassify) calculates annotations from one or more aligned bam files from many high-throughput sequencing technologies, and then builds a one-class model using these annotations to classify candidate SVs as likely true or false positives. We first used pedigree analysis to develop a set of high-confidence breakpoint-resolved large deletions. We then used svclassify to cluster and classify these deletions as well as a set of high-confidence deletions from the 1000 Genomes Project and a set of breakpoint-resolved complex insertions from Spiral Genetics. We find that likely SVs cluster separately from likely non-SVs based on our annotations, and that the SVs cluster into different types of deletions. We then developed a supervised one-class classification method that uses a training set of random non-SV regions to determine whether candidate SVs have abnormal annotations different from most of the genome. To test this classification method, we use our pedigree-based breakpoint-resolved SVs, SVs validated by the 1000 Genomes Project, and assembly-based breakpoint-resolved insertions, along with semi-automated visualization using svviz. We find that candidate SVs with high scores from multiple technologies have high concordance with PCR validation and an orthogonal consensus method MetaSV (99.7 % concordant), and candidate SVs with low scores are

  17. A Benchmark Estimate for the Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2001-01-01

    There are alternative methods to estimate a capital stock for a benchmark year. These methods, however, do not allow for an independent check, which could establish whether the estimated benchmark level is too high or too low. I propose here an optimal consistency method (OCM), which may allow estimating a capital stock level for a benchmark year and/or checking the consistency of alternative estimates of a benchmark capital stock.

  18. The Data Envelopment Analysis Method in Benchmarking of Technological Incubators

    Directory of Open Access Journals (Sweden)

    Bożena Kaczmarska

    2010-01-01

    Full Text Available This paper presents an original concept for the application of Data Envelopment Analysis (DEA) in benchmarking processes within innovation and entrepreneurship centers, based on the example of technological incubators. Applying the DEA method, it is possible to order the analyzed objects on the basis of explicitly defined relative efficiency, by compiling a rating list and rating classes. Establishing standards and indicating “clearances” allows the studied objects - innovation and entrepreneurship centers - to select a way of developing effectively, while preserving their individuality and a unique way of acting that takes account of local needs. (original abstract

  19. An international pooled analysis for obtaining a benchmark dose for environmental lead exposure in children

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Bellinger, David; Lanphear, Bruce

    2013-01-01

    Lead is a recognized neurotoxicant, but estimating effects at the lowest measurable levels is difficult. An international pooled analysis of data from seven cohort studies reported an inverse and supra-linear relationship between blood lead concentrations and IQ scores in children. The lack...... of a clear threshold presents a challenge to the identification of an acceptable level of exposure. The benchmark dose (BMD) is defined as the dose that leads to a specific known loss. As an alternative to elusive thresholds, the BMD is being used increasingly by regulatory authorities. Using the pooled data...... yielding lower confidence limits (BMDLs) of about 0.1-1.0 μ g/dL for the dose leading to a loss of one IQ point. We conclude that current allowable blood lead concentrations need to be lowered and further prevention efforts are needed to protect children from lead toxicity....

  20. Evaluation of piping fracture analysis method by benchmark study, 1

    International Nuclear Information System (INIS)

    Takahashi, Yukio; Kashima, Koichi; Kuwabara, Kazuo

    1987-01-01

    Importance of strength evaluation methods for cracked piping is growing with the progress of the rationalization of the nuclear piping system based on the leak-before-break concept. As an analytical tool, finite element method is principally used. To obtain the reliable solutions by the finite element programs, it is important to grasp the influences of various factors on the solutions. In this study, benchmark analysis is carried out for a stainless steel pipe with a circumferential through-wall crack subjected to four-point bending loading. Eight solutions obtained by using five finite element programs are compared with each other. Good agreement is obtained between the solutions on the deformation characteristics as well as fracture mechanics parameters. It is found through this study that the influence of the difference in the solution technique is generally small. (author)

  1. The Mutual Benchmarking Method for SMEs’ Competitive Strategy Development

    Directory of Open Access Journals (Sweden)

    Rostek Katarzyna

    2013-12-01

    Full Text Available Competitive advantage is a relative feature, evaluated in respect of other competing enterprises. The gaining of sustainable competitive advantage is conditioned by knowledge of own performance and the results of the competitive environment. SMEs have limited opportunities to obtain such information on their own. The method of mutual benchmarking changes this situation by introducing a collaborative network. The aim of the cooperation is to support each of the group members in achieving sustainable competitive advantage, which is the result of a conscious strategy, and not only a matter of chance. This cooperation is based on the collecting and processing of data and the sharing of information through a common IT platform: for example, a group of Polish SMEs was shown how to implement such a common IT solution and how to provide the information prepared within the proposed service. Taken together, this constitutes a complete proposal for effectively supporting the creation of a competitive strategy in SMEs.

  2. Using the benchmark dose (BMD) methodology to determine an appropriate reduction of certain ingredients in food products.

    Science.gov (United States)

    Bi, Jian

    2010-01-01

    As the desire to promote health increases, reductions of certain ingredients, for example, sodium, sugar, and fat in food products, are widely requested. However, the reduction is not risk free in sensory and marketing aspects. Over-reduction may change the taste and influence the flavor of a product and lead to a decrease in consumers' overall liking or purchase intent for the product. This article uses the benchmark dose (BMD) methodology to determine an appropriate reduction. Calculations of the BMD and the one-sided lower confidence limit of the BMD (BMDL) are illustrated. The article also discusses how to calculate the BMD and BMDL for over-dispersed binary data in replicated testing, based on a corrected beta-binomial model. The USEPA Benchmark Dose Software (BMDS) was used and S-Plus programs were developed. The method discussed in the article is primarily used to determine an appropriate reduction of certain ingredients, for example, sodium, sugar, and fat in food products, considering both health reasons and sensory or marketing risks.
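
    For the replicated-testing case, the over-dispersion that the corrected beta-binomial model absorbs is visible in its variance; this is the standard beta-binomial form (a general fact, not a formula quoted from the article), with ρ the inter-replicate correlation:

    ```latex
    \operatorname{Var}(X)=n\pi(1-\pi)\bigl[1+(n-1)\rho\bigr],\qquad 0\le\rho\le 1,
    ```

    which reduces to the plain binomial variance at ρ = 0; ignoring ρ > 0 understates the uncertainty in the fitted dose-response and hence overstates confidence in the BMDL.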

  3. Current modeling practice may lead to falsely high benchmark dose estimates.

    Science.gov (United States)

    Ringblom, Joakim; Johanson, Gunnar; Öberg, Mattias

    2014-07-01

    Benchmark dose (BMD) modeling is increasingly used as the preferred approach to define the point of departure for health risk assessment of chemicals. As data are inherently variable, there is always a risk of selecting a model that defines a lower confidence bound of the BMD (BMDL) that, contrary to expectation, exceeds the true BMD. The aim of this study was to investigate how often and under what circumstances such anomalies occur under current modeling practice. Continuous data were generated from a realistic dose-effect curve by Monte Carlo simulations using four dose groups and a set of five different dose placement scenarios, group sizes between 5 and 50 animals and coefficients of variation of 5-15%. The BMD calculations were conducted using nested exponential models, as most BMD software use nested approaches. "Non-protective" BMDLs (higher than the true BMD) were frequently observed, in some scenarios reaching 80%. The phenomenon was mainly related to the selection of the non-sigmoidal exponential model (Effect = a·e^(b·dose)). In conclusion, non-sigmoid models should be used with caution as they may underestimate the risk, illustrating that awareness of the model selection process and sound identification of the point of departure is vital for health risk assessment. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
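
    The mechanism is easy to reproduce in a toy Monte Carlo experiment: simulate continuous responses from a sigmoidal "true" curve, fit the non-sigmoidal exponential model named above, and count how often the estimated BMD already exceeds the true BMD (the paper counts BMDL exceedances; adding a bootstrap BMDL step is straightforward but slower). The curve shape, design, and CV below are illustrative assumptions, not the paper's scenarios.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(42)
    doses = np.array([0.0, 1.0, 3.0, 10.0])
    n_per_group, cv, bmr = 10, 0.10, 0.05

    def true_mean(d):                        # sigmoidal (Hill-type) truth
        return 1.0 + 0.5 * d**3 / (4.0**3 + d**3)

    # true BMD: dose at which the mean has risen by 5% over background
    grid = np.linspace(0.0, 10.0, 100001)
    true_bmd = grid[np.argmax(true_mean(grid) >= 1.05 * true_mean(0.0))]

    def expo(d, a, b):                       # non-sigmoidal exponential model
        return a * np.exp(b * d)

    nonprotective, n_sim = 0, 500
    for _ in range(n_sim):
        d_all = np.repeat(doses, n_per_group)
        y = rng.normal(true_mean(d_all), cv * true_mean(d_all))
        (a_hat, b_hat), _ = curve_fit(expo, d_all, y, p0=[1.0, 0.01])
        est_bmd = np.log(1 + bmr) / b_hat    # a*exp(b*BMD) = (1+BMR)*a
        nonprotective += est_bmd > true_bmd
    print(f"estimated BMD above the true BMD in {nonprotective/n_sim:.0%} of runs")
    ```

    The exceedance rate printed here depends entirely on how badly the exponential shape mismatches the sigmoidal truth near the BMR, which is the paper's central point.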

  4. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    Directory of Open Access Journals (Sweden)

    Julius Hannink

    2017-08-01

    Full Text Available Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
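
    For orientation, here is a generic sketch of the double-integration stage that such pipelines share: gravity-free acceleration is integrated to velocity, a linear drift correction enforces the zero-velocity assumption at the stride boundaries (a standard ZUPT-style trick), and velocity is integrated to position. It illustrates the concept only and is not one of the three schemes benchmarked in the paper; the toy signal is an assumption.

    ```python
    import numpy as np

    def stride_trajectory(acc_world, fs):
        """acc_world: (N,3) gravity-free acceleration in a world frame for one
        stride between two mid-stance (zero-velocity) instants; fs: rate in Hz."""
        dt = 1.0 / fs
        vel = np.cumsum(acc_world * dt, axis=0)        # naive integration
        # linear de-drifting: subtract the ramp that zeroes the final velocity
        ramp = np.linspace(0.0, 1.0, len(vel))[:, None] * vel[-1]
        vel -= ramp
        pos = np.cumsum(vel * dt, axis=0)              # integrate to position
        return pos

    # toy usage: 1 s symmetric accelerate/decelerate swing at 200 Hz
    fs = 200
    acc = np.zeros((fs, 3))
    acc[:fs // 2, 0] = 2.0
    acc[fs // 2:, 0] = -2.0
    print("stride length = %.2f m" % stride_trajectory(acc, fs)[-1, 0])
    ```

    The orientation-estimation stage (rotating sensor-frame readings into the world frame and removing gravity) precedes this step and is where the paper's three benchmarked alternatives differ most.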

  5. An Economical Approach to Estimate a Benchmark Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2003-01-01

    There are alternative methods of estimating capital stock for a benchmark year. However, these methods are costly and time-consuming, requiring the gathering of much basic information as well as the use of some convenient assumptions and guesses. In addition, a way is needed of checking whether the estimated benchmark is at the correct level. This paper proposes an optimal consistency method (OCM), which enables a capital stock to be estimated for a benchmark year, and which can also be used ...

  6. Benchmarking the minimum Electron Beam (eBeam) dose required for the sterilization of space foods

    Science.gov (United States)

    Bhatia, Sohini S.; Wall, Kayley R.; Kerth, Chris R.; Pillai, Suresh D.

    2018-02-01

    As manned space missions extend in length, the safety, nutrition, acceptability, and shelf life of space foods are of paramount importance to NASA. Since food and mealtimes play a key role in reducing stress and boredom of prolonged missions, the quality of food in terms of appearance, flavor, texture, and aroma can have significant psychological ramifications on astronaut performance. The FDA, which oversees space foods, currently requires a minimum dose of 44 kGy for irradiated space foods. The underlying hypothesis was that commercial sterility of space foods could be achieved at a significantly lower dose, and this lowered dose would positively affect the shelf life of the product. Electron beam processed beef fajitas were used as an example NASA space food to benchmark the minimum eBeam dose required for sterility. A 15 kGy dose was able to achieve an approximately 10 log reduction in Shiga-toxin-producing Escherichia coli bacteria, and a 5 log reduction in Clostridium sporogenes spores. Furthermore, accelerated shelf life testing (ASLT) to determine sensory and quality characteristics under various conditions was conducted. Using multidimensional gas chromatography-olfactometry-mass spectrometry (MDGC-O-MS), numerous volatiles were shown to be dependent on the dose applied to the product. Furthermore, concentrations of off-flavor aroma compounds such as dimethyl sulfide were decreased at the reduced 15 kGy dose. The results suggest that the combination of conventional cooking and eBeam processing (15 kGy) can achieve the safety and shelf-life objectives needed for long-duration space foods.

  7. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  8. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  9. Benchmarking burnup reconstruction methods for dynamically operated research reactors

    Energy Technology Data Exchange (ETDEWEB)

    Sternat, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Charlton, William S. [Univ. of Nebraska, Lincoln, NE (United States). National Strategic Research Institute; Nichols, Theodore F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-03-01

    The burnup of an HEU-fueled, dynamically operated research reactor, the Oak Ridge Research Reactor, was experimentally reconstructed using two different analytic methodologies and a suite of signature isotopes to evaluate techniques for estimating burnup of research reactor fuel. The methods studied include using individual signature isotopes and the complete mass spectrometry spectrum to recover the sample's burnup. The individual, or sets of, isotopes include 148Nd, 137Cs+137Ba, 139La, and 145Nd+146Nd. The storage documentation for the analyzed fuel material provided two different measures of burnup: burnup percentage and the total power generated by the assembly in MWd. When normalized to conventional units, these two references differed by 7.8% (395.42 GWd/MTHM vs. 426.27 GWd/MTHM) in the resulting burnup for the spent fuel element used in the benchmark. Among all methods evaluated, the results were within 11.3% of either reference burnup. The results were mixed in their closeness to the two reference burnups; however, consistent results were achieved across all three experimental samples.
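
    For orientation, the sketch below shows the 148Nd burnup indicator in its simplest form (cf. ASTM E321): burnup follows from the measured ratio of 148Nd atoms to initial heavy-metal atoms, a cumulative fission yield, and an energy-per-fission conversion. The measured ratio is hypothetical, picked near the benchmark's reported range; the yield and conversion factor are round textbook values.

```python
# Sketch: the 148Nd burnup indicator in its simplest form (cf. ASTM E321).
# The measured atom ratio below is hypothetical, picked near the benchmark's
# reported range; the yield and conversion factor are round textbook values.

Y_ND148 = 0.0167          # cumulative fission yield of 148Nd (U-235, thermal)
GWD_PER_PCT_FIMA = 9.5    # ~200 MeV/fission makes 1% FIMA roughly 9.5 GWd/MTHM

def burnup_from_nd148(nd148_per_initial_hm_atom: float):
    """Burnup from the ratio of 148Nd atoms to initial heavy-metal atoms."""
    fima_pct = 100.0 * nd148_per_initial_hm_atom / Y_ND148
    return fima_pct, fima_pct * GWD_PER_PCT_FIMA

fima, gwd = burnup_from_nd148(0.0070)      # hypothetical measured ratio
print(f"{fima:.1f}% FIMA ~ {gwd:.0f} GWd/MTHM")
```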

  10. Dynamic Rupture Benchmarking of the ADER-DG Method

    Science.gov (United States)

    Gabriel, Alice; Pelties, Christian

    2013-04-01

    We will verify the arbitrary high-order derivative Discontinuous Galerkin (ADER-DG) method in various test cases of the 'SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise' benchmark suite (Harris et al. 2009). The ADER-DG scheme is able to solve the spontaneous rupture problem with high-order accuracy in space and time on three-dimensional unstructured tetrahedral meshes. Strong mesh coarsening or refinement at areas of interest can be applied to keep the computational costs feasible. Moreover, the method does not generate spurious high-frequency contributions in the slip rate spectra and therefore does not require any artificial damping, as demonstrated in previous presentations and publications (Pelties et al. 2010 and 2012). We will show that the mentioned features also hold for more advanced setups, e.g. a branching fault system, heterogeneous background stresses and bimaterial faults. The advanced geometrical flexibility combined with enhanced accuracy will make the ADER-DG method a useful tool to study earthquake dynamics on complex fault systems in realistic rheologies. References: Harris, R.A., M. Barall, R. Archuleta, B. Aagaard, J.-P. Ampuero, H. Bhat, V. Cruz-Atienza, L. Dalguer, P. Dawson, S. Day, B. Duan, E. Dunham, G. Ely, Y. Kaneko, Y. Kase, N. Lapusta, Y. Liu, S. Ma, D. Oglesby, K. Olsen, A. Pitarka, S. Song, and E. Templeton, The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise, Seismological Research Letters, vol. 80, no. 1, pages 119-126, 2009 Pelties, C., J. de la Puente, and M. Kaeser, Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method, AGU 2010 Fall Meeting, abstract #S21C-2068 Pelties, C., J. de la Puente, J.-P. Ampuero, G. Brietzke, and M. Kaeser, Three-Dimensional Dynamic Rupture Simulation with a High-order Discontinuous Galerkin Method on Unstructured Tetrahedral Meshes, JGR - Solid Earth, vol. 117, B02309, 2012

  11. Alignment methods: strategies, challenges, benchmarking, and comparative overview.

    Science.gov (United States)

    Löytynoja, Ari

    2012-01-01

    Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is, errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error, but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution, but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and are unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods, and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.

  12. Netherlands contribution to the EC project: Benchmark exercise on dose estimation in a regulatory context

    International Nuclear Information System (INIS)

    Stolk, D.J.

    1987-04-01

    On request of the Netherlands government, FEL-TNO is developing a decision support system, with the acronym RAMBOS, for the assessment of the off-site consequences of an accident with hazardous materials. This is a user-friendly, interactive computer program which uses very sophisticated graphical means. RAMBOS supports the emergency planning organization in two ways. Firstly, the risk to residents in the surroundings of the accident is quantified in terms of severity and magnitude (number of casualties, etc.). Secondly, the consequences of countermeasures, such as sheltering and evacuation, are predicted. By evaluating several countermeasures the user can determine an optimum policy to reduce the impact of the accident. Within the framework of the EC project 'Benchmark exercise on dose estimation in a regulatory context', calculations were carried out with the RAMBOS system at the request of the Ministry of Housing, Physical Planning and Environment. This report contains the results of these calculations. 3 refs.; 2 figs.; 10 tabs

  13. Study on the shipboard radar reconnaissance equipment azimuth benchmark method

    Science.gov (United States)

    Liu, Zhenxing; Jiang, Ning; Ma, Qian; Liu, Songtao; Wang, Longtao

    2015-10-01

    Future naval battles will take place in a complex electromagnetic environment, so seizing electromagnetic superiority has become a major concern of the navy. Radar reconnaissance equipment is an important part of the system used to obtain and exploit battlefield information on electromagnetic radiation sources, and azimuth measurement is one of its main functions. Whether the direction-finding accuracy meets requirements determines whether a vessel can successfully carry out active jamming, passive jamming, guided-missile attack and other combat missions, and thus has a direct bearing on its combat capability. Testing the performance of radar reconnaissance equipment while interfering with operational tasks as little as possible is therefore a challenge. This paper, based on a radar signal simulator and GPS positioning equipment, investigates and tests a new method that provides the azimuth benchmark required by direction-finding precision tests anytime and anywhere, allowing ships at the jetty to test the direction-finding performance of their radar reconnaissance equipment. It provides a powerful means for the daily maintenance and repair of naval radar reconnaissance equipment [1].

  14. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    Science.gov (United States)

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often realized in a retrospective way, notably by studying the enrichment of benchmarking data sets. For this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high-quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.

  15. Benchmarking and validation of a Geant4-SHADOW Monte Carlo simulation for dose calculations in microbeam radiation therapy.

    Science.gov (United States)

    Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael

    2014-05-01

    Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.

  16. Benchmark for the qualification of gamma shielding calculation methods for light-water type reactor spent fuels

    International Nuclear Information System (INIS)

    Blum, P.; Cagnon, R.; Nimal, J.C.

    1982-01-01

    This report gives the results of a campaign of gamma dose rate measurements in the vicinity of a transport package loaded with 12 PWR spent fuel assemblies, together with the characteristics of the package and the fuel. It describes the measuring methods and gives the accuracy of the data, which will be useful as benchmarks for checking the calculation methods used to verify the gamma shielding of such packages. It shows how to calculate gamma dose rates from the data given on the package and the fuel, gives the results of a calculation with the Mercure IV code, and compares them to the measurements.

  17. Benchmarking electrical methods for rapid estimation of root biomass.

    Science.gov (United States)

    Postic, François; Doussan, Claude

    2016-01-01

    To face climate change and subsequent rainfall instabilities, crop breeding strategies now include root trait phenotyping. Rapid estimation of root traits in controlled conditions can be achieved by using parallel electrical capacitance and its linear correlation with root dry mass. The aim of the present study was to improve the robustness and efficiency of methods based on capacitance and other electrical variables, such as serial/parallel resistance, conductance, impedance or reactance. Using different electrode configurations and stem contact electrodes, we have measured the electrical impedance spectra of wheat plants grown in pots filled with three types of soil. For each configuration, parallel capacitance and other linearly independent electrical variables were computed and their quality as root dry mass estimators was evaluated by a 'sensitivity score' that we derived from Pearson's correlation coefficient r and linear regression parameters. The highest sensitivity score was obtained by parallel capacitance at an alternating current frequency of 116 Hz in three-terminal configuration. Using a clamp, instead of a needle, as a stem electrode did not significantly affect the capacitance measurements. Finally, in handheld LCR meter equivalent conditions, capacitance had the highest sensitivity score and determination coefficient (r² = 0.52) at a frequency of 10 kHz. Our benchmarking of linear correlations between different electrical variables and root dry mass makes it possible to determine more coherent practices for ensuring sensitive and robust root dry mass estimation, including in handheld LCR meter conditions. This should enhance the value of electrical capacitance as a tool for screening crops with respect to their root systems in breeding programs.

  18. Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William

    2012-01-01

    Highlights: • Fuzzy linear regression method is used for developing benchmarking systems. • The systems can be used to benchmark energy efficiency of commercial buildings. • The resulting benchmarking model can be used by public users. • The resulting benchmarking model can capture the fuzzy nature of input–output data. - Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input–output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This present paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.

  19. Three anisotropic benchmark problems for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Čertík, O.; Korous, L.

    2013-01-01

    Roč. 219, č. 13 (2013), s. 7286-7295 ISSN 0096-3003 R&D Projects: GA AV ČR IAA100760702 Institutional support: RVO:61388998 Keywords : benchmark problem * anisotropic solution * boundary layer Subject RIV: BA - General Mathematics Impact factor: 1.600, year: 2013

  20. SMORN-III benchmark test on reactor noise analysis methods

    International Nuclear Information System (INIS)

    Shinohara, Yoshikuni; Hirota, Jitsuya

    1984-02-01

    A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III) which was held in Tokyo, Japan in October 1981. This report summarizes the results of the test as well as the works made for preparation of the test. (author)

  1. NRC-BNL Benchmark Program on Evaluation of Methods for Seismic Analysis of Coupled Systems

    International Nuclear Information System (INIS)

    Chokshi, N.; DeGrassi, G.; Xu, J.

    1999-01-01

    A NRC-BNL benchmark program for evaluation of state-of-the-art analysis methods and computer programs for seismic analysis of coupled structures with non-classical damping is described. The program includes a series of benchmarking problems designed to investigate various aspects of complexities, applications and limitations associated with methods for analysis of non-classically damped structures. Discussions are provided on the benchmarking process, benchmark structural models, and the evaluation approach, as well as benchmarking ground rules. It is expected that the findings and insights, as well as recommendations from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems

  2. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption than those estimated using the relative deviation approach.
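
    A minimal sketch of the relative-deviation approach discussed above, assuming illustrative dose-group means and a single power model: the BMD is the dose at which the fitted mean shifts 10% from the control mean. Real assessments (e.g. with EPA BMDS) fit suites of models and report a BMDL confidence bound as well.

```python
# Sketch: relative-deviation BMD on illustrative summary data. The BMD is the
# dose where the fitted mean shifts 10% from the control mean. Data and model
# are hypothetical; real work (e.g. EPA BMDS) fits model suites and reports
# a BMDL confidence bound as well.
import numpy as np
from scipy.optimize import brentq, curve_fit

doses = np.array([0.0, 10.0, 30.0, 100.0])
means = np.array([50.0, 48.5, 45.0, 38.0])     # e.g. body-weight responses

def power_model(d, background, slope, power):
    return background - slope * d ** power

params, _ = curve_fit(power_model, doses, means, p0=[50.0, 0.5, 1.0],
                      bounds=([0.0, 0.0, 0.5], [100.0, 10.0, 4.0]))

bmr = 0.10                                     # 10% relative deviation
target = power_model(0.0, *params) * (1.0 - bmr)
bmd = brentq(lambda d: power_model(d, *params) - target, 1e-6, doses.max())
print(f"BMD at 10% relative deviation: {bmd:.1f} dose units")
```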

  3. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more sensitive to the distribution assumption than those estimated using the relative deviation approach.

  4. Benchmarking state-of-the-art optical simulation methods for analyzing large nanophotonic structures

    DEFF Research Database (Denmark)

    Gregersen, Niels; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2018-01-01

    Five computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. Careful convergence studies reveal that some methods are more suitable than others for analyzing these cavities.

  5. Method of preparing radionuclide doses

    International Nuclear Information System (INIS)

    Kuperus, J.H.

    1987-01-01

    A method is described of preparing aliquot doses of a tracer material useful in diagnostic nuclear medicine, comprising: storing discrete quantities of a lyophilized radionuclide carrier in separate tubular containers from which air and moisture are excluded; selecting from the tubular containers a container in which is stored a carrier appropriate for the nuclear diagnostic test to be performed; interposing the selected container between the needle and the barrel of a hypodermic syringe; and drawing a predetermined amount of a liquid containing a radionuclide tracer in known concentration into the hypodermic syringe barrel, through the hypodermic needle and through the selected container, to dissolve the discrete quantity of lyophilized carrier therein and combine the carrier with the radionuclide tracer to form an aliquot dose of nuclear diagnostic tracer material, as needed.

  6. Review of California and National Methods for Energy Performance Benchmarking of Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Matson, Nance E.; Piette, Mary Ann

    2005-09-05

    This benchmarking review has been developed to support benchmarking planning and tool development under discussion by the California Energy Commission (CEC), Lawrence Berkeley National Laboratory (LBNL) and others in response to the Governor's Executive Order S-20-04 (2004). The Executive Order sets a goal of benchmarking and improving the energy efficiency of California's existing commercial building stock. The Executive Order requires the CEC to propose "a simple building efficiency benchmarking system for all commercial buildings in the state". This report summarizes and compares two currently available commercial building energy-benchmarking tools. One tool is the U.S. Environmental Protection Agency's Energy Star National Energy Performance Rating System, which is a national regression-based benchmarking model (referred to in this report as Energy Star). The second is Lawrence Berkeley National Laboratory's Cal-Arch, which is a California-based distributional model (referred to as Cal-Arch). Prior to the time Cal-Arch was developed in 2002, there were several other benchmarking tools available to California consumers but none that were based solely on California data. The Energy Star and Cal-Arch benchmarking tools both provide California with unique and useful methods to benchmark the energy performance of California's buildings. Rather than determine which model is "better", the purpose of this report is to understand and compare the underlying data, information systems, assumptions, and outcomes of each model.

  7. Framework for benchmarking online retailing performance using fuzzy AHP and TOPSIS method

    Directory of Open Access Journals (Sweden)

    M. Ahsan Akhtar Hasin

    2012-08-01

    Due to the increasing penetration of internet connectivity, on-line retail is growing from the pioneer phase toward increasing integration within people's lives and companies' normal business practices. In this increasingly competitive environment, on-line retail service providers require a systematic and structured approach to gain a cutting edge over rivals, and benchmarking has become indispensable for accomplishing superior performance. This paper uses the fuzzy analytic hierarchy process (FAHP) approach to support a generic on-line retail benchmarking process. Critical success factors for on-line retail service have been identified from a structured questionnaire and the literature and prioritized using fuzzy AHP. Using these critical success factors, the performance level of ORENET, an on-line retail service provider, is benchmarked along with four other on-line service providers using the TOPSIS method. Based on the benchmark, their relative ranking is also illustrated.
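
    A minimal sketch of the TOPSIS ranking step, assuming hypothetical retailer scores and weights standing in for the fuzzy-AHP priorities; it is not the paper's actual data.

```python
# Sketch: the TOPSIS ranking step on hypothetical scores for five retailers
# against three criteria, with weights standing in for fuzzy-AHP priorities.
import numpy as np

def topsis(matrix, weights, benefit):
    """Closeness coefficient per alternative; higher = closer to the ideal."""
    v = matrix / np.linalg.norm(matrix, axis=0) * weights  # normalize, weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

scores = np.array([[7., 9., 6.], [8., 7., 7.], [6., 8., 9.],
                   [9., 6., 5.], [7., 7., 8.]])       # rows = retailers
weights = np.array([0.5, 0.3, 0.2])                   # hypothetical priorities
benefit = np.array([True, True, True])                # all benefit-type criteria
cc = topsis(scores, weights, benefit)
print(np.argsort(-cc) + 1)                            # ranking, best first
```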

  8. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1987-11-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 [1] methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed. The effective dose equivalent determined using ICRP-26 methods is significantly smaller than the dose equivalent determined by traditional methods. No existing personnel dosimeter or health physics instrument can determine effective dose equivalent. At the present time, the conversion of dosimeter response to dose equivalent is based on calculations for maximal or "cap" values using homogeneous spherical or cylindrical phantoms. The evaluated dose equivalent is, therefore, a poor approximation of the effective dose equivalent as defined by ICRP Publication 26. 3 refs., 2 figs., 1 tab
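
    The ICRP-26 weighted-sum step referred to above is compact enough to state in code: the effective dose equivalent is H_E = Σ w_T · H_T. The sketch below uses the published ICRP-26 tissue weighting factors; the organ dose equivalents are hypothetical.

```python
# Sketch: the ICRP-26 weighted sum, H_E = sum(w_T * H_T). The w_T below are
# the published ICRP-26 tissue weighting factors; the organ dose equivalents
# are hypothetical placeholders.

W_T = {
    "gonads": 0.25, "breast": 0.15, "red bone marrow": 0.12, "lung": 0.12,
    "thyroid": 0.03, "bone surfaces": 0.03, "remainder": 0.30,
}

def effective_dose_equivalent(organ_doses_sv):
    """Weighted sum of organ dose equivalents (Sv) per ICRP-26."""
    return sum(W_T[tissue] * h for tissue, h in organ_doses_sv.items())

organ_doses = {tissue: 1.0e-3 for tissue in W_T}  # 1 mSv everywhere (hypothetical)
print(f"H_E = {effective_dose_equivalent(organ_doses):.2e} Sv")  # -> 1.00e-03 Sv
```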

  9. Hand rub dose needed for a single disinfection varies according to product: A bias in benchmarking using indirect hand hygiene indicator

    Directory of Open Access Journals (Sweden)

    Raphaële Girard

    2012-12-01

    Results: Data from 27 products and 1706 tests were analyzed. Depending on the product, the dose needed to ensure a 30-s contact duration in 75% of tests ranged from 2 ml to more than 3 ml, and the dose needed to ensure a contact duration exceeding the EN 1500 times in 75% of tests ranged from 1.5 ml to more than 3 ml. The implication is the following: if different products are used, the volume applied does not give an unbiased estimate of HH compliance. Other compliance evaluation methods remain necessary for efficient benchmarking.

  10. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231

  11. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.

  12. Fitting and benchmarking of Monte Carlo output parameters for iridium-192 high dose rate brachytherapy source

    International Nuclear Information System (INIS)

    Acquah, F.G.

    2011-01-01

    Brachytherapy, the use of radioactive sources for the treatment of tumours, is an important tool in radiation oncology. Accurate calculation of the dose delivered to malignant and normal tissues is a central responsibility of the medical physics staff. With the use of treatment planning system (TPS) computers now standard practice in radiation oncology departments, independent calculations to certify the results of these commercial TPSs are an important part of a good quality management system for brachytherapy implants. There are inherent errors in the dose distributions produced by these TPSs due to their failure to account for heterogeneities in the calculation algorithms, and the Monte Carlo (MC) method seems to be the panacea for these problems. In this study, a functional fit to MC output parameters was performed, using the Matlab curve-fitting tools, to reduce dose calculation uncertainty. This includes modification of the AAPM TG-43 parameters to accommodate new developments for rapid brachytherapy dose rate calculation. Analytical computations were performed to hybridize the anisotropy function F(r,θ) and the radial dose function g(r) into a single new function f(r,θ) for the Nucletron microSelectron High Dose Rate 'new or v2' (mHDRv2) 192Ir brachytherapy source. In order to minimize computation time and improve the accuracy of manual calculations, the dosimetry function f(r,θ) used fewer parameters and formulas for the fit. Using the MC outputs as the standard, the percentage errors of the fits were calculated and used to evaluate the average and maximum uncertainties. Dose rate deviations between the MC data and the fit were also quantified as errors (E), which were minimal. These results show that the dosimetry parameters from this study are in good agreement with the MC output parameters and better than results obtained from the literature. The work shows considerable promise for building robust independent dose calculation tools.
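
    For context, a minimal TG-43 point-source dose-rate sketch of the kind such fits feed into, D(r,θ) = S_K · Λ · (r0/r)² · g(r) · F(r,θ); the radial dose table, air-kerma strength and dose-rate constant are illustrative stand-ins, not mHDRv2 consensus data.

```python
# Sketch: a TG-43 point-source dose-rate evaluation,
# D(r, theta) = Sk * Lambda * (r0/r)^2 * g(r) * F(r, theta), with r0 = 1 cm.
# The g(r) table, Sk and Lambda below are illustrative, not mHDRv2 consensus data.
import numpy as np

R_GRID = np.array([0.5, 1.0, 2.0, 3.0, 5.0])      # cm
G_R = np.array([1.01, 1.00, 0.98, 0.95, 0.90])    # radial dose function (assumed)

def dose_rate(sk_u, lam_cgy_per_hu, r_cm, f_r_theta):
    """Dose rate in cGy/h; f_r_theta is the anisotropy value at (r, theta)."""
    g = np.interp(r_cm, R_GRID, G_R)              # interpolate g(r)
    return sk_u * lam_cgy_per_hu * (1.0 / r_cm) ** 2 * g * f_r_theta

# Sk = 40000 U (10 Ci-class HDR source), Lambda = 1.1 cGy/(h*U): assumed values.
print(f"{dose_rate(40000.0, 1.1, 2.0, 0.97):.0f} cGy/h at r = 2 cm")
```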

  13. Simplified dose calculation method for mantle technique

    International Nuclear Information System (INIS)

    Scaff, L.A.M.

    1984-01-01

    A simplified dose calculation method for the mantle technique is described. In the routine treatment of lymphomas using this technique, the daily doses at the midpoints of five anatomical regions differ because the thicknesses are not equal. (Author) [pt

  14. The Global Benchmarking as a Method of Countering the Intellectual Migration in Ukraine

    Directory of Open Access Journals (Sweden)

    Striy Lyubov A.

    2017-05-01

    The publication is aimed at studying global benchmarking as a method of countering intellectual migration in Ukraine. The article explores the process of intellectual migration in Ukraine; analyzes the current status of the country in the light of the crisis and the problems that have arisen; provides statistical data on the migration process and determines a method of countering it; considers types of benchmarking; analyzes the benchmarking method as a way of achieving this objective; and identifies both the benefits to be derived from this method and the «bottlenecks» in the State's regulation of migratory flows, not only to call attention to them but also to prompt corrective action.

  15. Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduced our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478

  16. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementation for three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Radiation transport benchmarks for simple geometries with void regions using the spherical harmonics method

    International Nuclear Information System (INIS)

    Kobayashi, K.

    2009-01-01

    In 2001, an international cooperative study of 3D radiation transport benchmarks for simple geometries with void regions was performed under the leadership of E. Sartori of OECD/NEA. There were contributions from eight institutions; six contributions used the discrete ordinates method and only two the spherical harmonics method. The 3D spherical harmonics program FFT3, based on the finite Fourier transformation method, has been improved for this presentation, and benchmark solutions for the 2D and 3D simple geometries with void regions, obtained with FFT2 and FFT3, are given, showing fairly good accuracy. (authors)

  18. Benchmark studies of induced radioactivity and remanent dose rates produced in LHC materials

    International Nuclear Information System (INIS)

    Brugger, M.; Mayer, S.; Roesler, S.; Ulrici, L.; Khater, H.; Prinz, A.; Vincke, H.

    2005-01-01

    Samples of materials that will be used for elements of the LHC machine as well as for shielding and construction components were irradiated in the stray radiation field of the CERN-EU high-energy Reference Field facility. The materials included various types of steel, copper, titanium, concrete and marble, as well as light materials such as carbon composites and boron nitride. Emphasis was put on an accurate recording of the irradiation conditions, such as irradiation profile and intensity, and on a detailed determination of the elemental composition of the samples. After the irradiation, the specific activity induced in the samples as well as the remanent dose rate were measured at different cooling times ranging from about 20 minutes to two months. Furthermore, the irradiation experiment was simulated using the FLUKA Monte Carlo code, and specific activities as well as dose rates were calculated. The latter was based on a new method simulating the production of various isotopes and the electromagnetic cascade induced by radioactive decay at a given cooling time. In general, solid agreement was found, which engenders confidence in the predictive power of the applied codes and tools for the estimation of the radioactive nuclide inventory of the LHC machine as well as the calculation of remanent doses to personnel during interventions. (authors)

  19. A cross-benchmark comparison of 87 learning to rank methods

    NARCIS (Netherlands)

    Tax, N.; Bockting, S.; Hiemstra, D.

    2015-01-01

    Learning to rank is an increasingly important scientific field that comprises the use of machine learning for the ranking task. New learning to rank methods are generally evaluated on benchmark test collections. However, comparison of learning to rank methods based on evaluation results is hindered

  20. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods.

    Science.gov (United States)

    Schaffter, Thomas; Marbach, Daniel; Floreano, Dario

    2011-08-15

    Over the last decade, numerous methods have been developed for inference of regulatory networks from gene expression data. However, accurate and systematic evaluation of these methods is hampered by the difficulty of constructing adequate benchmarks and the lack of tools for a differentiated analysis of network predictions on such benchmarks. Here, we describe a novel and comprehensive method for in silico benchmark generation and performance profiling of network inference methods, available to the community as open-source software called GeneNetWeaver (GNW). In addition to the generation of detailed dynamical models of gene regulatory networks to be used as benchmarks, GNW provides a network motif analysis that reveals systematic prediction errors, thereby indicating potential ways of improving inference methods. The accuracy of network inference methods is evaluated using standard metrics such as precision-recall and receiver operating characteristic curves. We show how GNW can be used to assess the performance and identify the strengths and weaknesses of six inference methods. Furthermore, we used GNW to provide the international Dialogue for Reverse Engineering Assessments and Methods (DREAM) competition with three network inference challenges (DREAM3, DREAM4 and DREAM5). GNW is available at http://gnw.sourceforge.net along with its Java source code, user manual and supporting data. Supplementary data are available at Bioinformatics online. dario.floreano@epfl.ch.
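
    A minimal sketch of the precision-recall evaluation applied to a confidence-ranked edge list, on a hypothetical five-edge prediction; GNW computes such curves against its generated gold-standard networks.

```python
# Sketch: precision-recall over a confidence-ranked edge list versus a
# gold-standard network, as in GNW-style evaluations. The tiny network is
# hypothetical.
import numpy as np

def precision_recall(ranked_edges, true_edges):
    """Precision and recall after each prediction in the ranked list."""
    tp, precision, recall = 0, [], []
    for k, edge in enumerate(ranked_edges, start=1):
        tp += edge in true_edges
        precision.append(tp / k)
        recall.append(tp / len(true_edges))
    return np.array(precision), np.array(recall)

gold = {("G1", "G2"), ("G2", "G3"), ("G1", "G4")}
ranked = [("G1", "G2"), ("G3", "G4"), ("G2", "G3"), ("G1", "G3"), ("G1", "G4")]
p, r = precision_recall(ranked, gold)
print(p.round(2), r.round(2))   # recall reaches 1.0 once all true edges appear
```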

  1. Benchmark calculations for evaluation methods of gas volumetric leakage rate

    International Nuclear Information System (INIS)

    Asano, R.; Aritomi, M.; Matsuzaki, M.

    1998-01-01

    A containment function of radioactive materials transport casks is essential for safe transportation, to prevent the radioactive materials from being released into the environment. Regulations such as the IAEA standards determine the limit of radioactivity that may be released. Since it is not practical for leakage tests to measure directly the radioactivity released from a package, gas volumetric leakage rates are proposed in the ANSI N14.5 and ISO standards. In our previous works, gas volumetric leakage rates for several kinds of gas from various leaks were measured and two evaluation methods, 'a simple evaluation method' and 'a strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with an expansion effect. The strict evaluation method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for an assumed large spent fuel transport cask (Type B package) with a wet or dry cavity and at three transport conditions: normal transport with intact fuels or failed fuels, and an accident in transport. The standard leakage rates and criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of the ANSI method greatly overestimates the criteria for tests; the laminar flow models of both the ANSI and ISO methods slightly overestimate the criteria for tests; the above two results are within the design margin for ordinary transport conditions, and all methods are useful for the evaluation; for severe conditions such as failed fuel transportation, attention should be paid when applying the choked flow model of the ANSI method. (authors)
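
    A minimal sketch of the laminar flow regime underlying the 'simple evaluation method': isothermal compressible Poiseuille flow through an idealized capillary, with friction loss only and no exit-loss term. The geometry, pressures and viscosity are illustrative, and the formula is textbook physics rather than the exact ANSI/ISO correlation.

```python
# Sketch: isothermal laminar (Poiseuille) leakage through an idealized capillary,
# friction loss only, no exit-loss term. Geometry, pressures and viscosity are
# illustrative, and this is not the exact ANSI N14.5 / ISO correlation.
import math

def laminar_leak_rate(d_m, length_m, p_up_pa, p_down_pa, mu_pa_s):
    """Volumetric leak rate referenced to upstream pressure, m^3/s."""
    return math.pi * d_m ** 4 * (p_up_pa ** 2 - p_down_pa ** 2) / (
        256.0 * mu_pa_s * length_m * p_up_pa)

# 10-micron capillary, 5 mm long, 2 atm inside to 1 atm outside, air at ~20 C
q = laminar_leak_rate(10e-6, 5e-3, 2.0e5, 1.0e5, 1.8e-5)
print(f"{q:.2e} m^3/s (upstream reference)")   # on the order of 1e-10 m^3/s
```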

  2. Benchmarking the inelastic neutron scattering soil carbon method

    Science.gov (United States)

    The herein described inelastic neutron scattering (INS) method of measuring soil carbon was based on a new procedure for extracting the net carbon signal (NCS) from the measured gamma spectra and determination of the average carbon weight percent (AvgCw%) in the upper soil layer (~8 cm). The NCS ext...

  3. Dose estimation by biological methods

    International Nuclear Information System (INIS)

    Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.

    1997-01-01

    The human being is exposed to strong artificial radiation sources, mainly in two ways: the first concerns occupationally exposed personnel (POE) and the second, persons requiring radiological treatment. A third, less common way is through accidents. In all these conditions it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of research to validate the fluorescence in situ hybridization (FISH) technique, which allows analysis of chromosome aberrations. (Author)

  4. Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry

    International Nuclear Information System (INIS)

    Sohrabpour, M.; Hassanzadeh, M.; Shahriari, M.; Sharifzadeh, M.

    2002-01-01

    The Monte Carlo transport code MCNP has been applied to simulating the dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose ratios, and efficiencies have been simulated as functions of product density. Simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The produced system design data were also found to agree quite favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose mapping exercises for gamma irradiators.

  5. Benchmarking lattice physics data and methods for boiling water reactor analysis

    International Nuclear Information System (INIS)

    Cacciapouti, R.J.; Edenius, M.; Harris, D.R.; Hebert, M.J.; Kapitz, D.M.; Pilat, E.E.; VerPlanck, D.M.

    1983-01-01

    The objective of the work reported was to verify the adequacy of lattice physics modeling for the analysis of the Vermont Yankee BWR using a multigroup, two-dimensional transport theory code. The BWR lattice physics methods have been benchmarked against reactor physics experiments, higher order calculations, and actual operating data

  6. Development of a set of benchmark problems to verify numerical methods for solving burnup equations

    International Nuclear Information System (INIS)

    Lago, Daniel; Rahnema, Farzad

    2017-01-01

    Highlights: • Description of transmutation chain benchmark problems. • Problems for validating numerical methods for solving burnup equations. • Analytical solutions for the burnup equations. • Numerical solutions for the burnup equations. - Abstract: A comprehensive set of transmutation chain benchmark problems for numerically validating methods for solving burnup equations was created. These benchmark problems were designed to challenge both traditional and modern numerical methods used to solve the complex set of ordinary differential equations used for tracking the change in nuclide concentrations over time due to nuclear phenomena. Given that most burnup solvers are developed for coupling with an established transport solution method, these problems provide a useful resource for testing and validating a burnup equation solver before it is coupled for use in a lattice or core depletion code. All the relevant parameters for each benchmark problem are described. Results are also provided in the form of reference solutions generated by the Mathematica tool, as well as additional numerical results from MATLAB.
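
    A minimal sketch of the kind of solver these benchmark problems exercise, assuming a hypothetical three-nuclide decay chain: the burnup equations dN/dt = AN solved with a matrix exponential.

```python
# Sketch: the burnup/decay equations dN/dt = A*N solved with a matrix
# exponential, the kind of solver these benchmark problems exercise. The
# three-nuclide chain and decay constants are hypothetical, not one of the
# benchmark problems.
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 1.0e-3, 5.0e-4            # decay constants in 1/s (assumed)
A = np.array([[-lam1, 0.0,   0.0],     # nuclide 1 decays ...
              [ lam1, -lam2, 0.0],     # ... to nuclide 2, which decays ...
              [ 0.0,  lam2,  0.0]])    # ... to a stable nuclide 3

n0 = np.array([1.0e20, 0.0, 0.0])      # initial atom counts
n_t = expm(A * 3600.0) @ n0            # concentrations after one hour
print(n_t, n_t.sum())                  # total conserved for a pure decay chain
```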

  7. Methods of bone marrow dose calculation

    International Nuclear Information System (INIS)

    Taboaco, R.C.

    1982-02-01

    Several methods of bone marrow dose calculation for photon irradiation were analysed. After a critical analysis, the author proposes that the Instituto de Radioprotecao e Dosimetria/CNEN adopt Rosenstein's method for dose calculations in radiodiagnostic examinations and Kramer's method in the case of occupational irradiation. It was verified by Eckerman and Simpson that, for monoenergetic gamma emitters uniformly distributed within the bone mineral of the skeleton, the dose at the bone surface can be several times higher than the dose in the skeleton. Accordingly, the calculation of tissue-air ratios for bone surfaces in some irradiation geometries and photon energies is also proposed for inclusion in Rosenstein's method for organ dose calculation in radiodiagnostic examinations. (Author) [pt

  8. Semiempirical Quantum-Chemical Orthogonalization-Corrected Methods: Benchmarks for Ground-State Properties.

    Science.gov (United States)

    Dral, Pavlo O; Wu, Xin; Spörkel, Lasse; Koslowski, Axel; Thiel, Walter

    2016-03-08

    The semiempirical orthogonalization-corrected OMx methods (OM1, OM2, and OM3) go beyond the standard MNDO model by including additional interactions in the electronic structure calculation. When augmented with empirical dispersion corrections, the resulting OMx-Dn approaches offer a fast and robust treatment of noncovalent interactions. Here we evaluate the performance of the OMx and OMx-Dn methods for a variety of ground-state properties using a large and diverse collection of benchmark sets from the literature, with a total of 13035 original and derived reference data. Extensive comparisons are made with the results from established semiempirical methods (MNDO, AM1, PM3, PM6, and PM7) that also use the NDDO (neglect of diatomic differential overlap) integral approximation. Statistical evaluations show that the OMx and OMx-Dn methods outperform the other methods for most of the benchmark sets.

  9. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1988-01-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed

  10. What is a food and what is a medicinal product in the European Union? Use of the benchmark dose (BMD) methodology to define a threshold for "pharmacological action".

    Science.gov (United States)

    Lachenmeier, Dirk W; Steffen, Christian; el-Atma, Oliver; Maixner, Sibylle; Löbell-Behrends, Sigrid; Kohl-Himmelseher, Matthias

    2012-11-01

    The decision criterion for the demarcation between foods and medicinal products in the EU is the significant "pharmacological action". Based on six examples of substances with ambivalent status, the benchmark dose (BMD) method is evaluated to provide a threshold for pharmacological action. Using significant dose-response models from literature clinical trial data or epidemiology, the BMD values were 63 mg/day for caffeine, 5 g/day for alcohol, 6 mg/day for lovastatin, 769 mg/day for glucosamine sulfate, 151 mg/day for Ginkgo biloba extract, and 0.4 mg/day for melatonin. The examples for caffeine and alcohol validate the approach because intake above BMD clearly exhibits pharmacological action. Nevertheless, due to uncertainties in dose-response modelling as well as the need for additional uncertainty factors to consider differences in sensitivity within the human population, a "borderline range" on the dose-response curve remains. "Pharmacological action" has proven to be not very well suited as binary decision criterion between foods and medicinal product. The European legislator should rethink the definition of medicinal products, as the current situation based on complicated case-by-case decisions on pharmacological action leads to an unregulated market flooded with potentially illegal food supplements. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Hand rub dose needed for a single disinfection varies according to product: a bias in benchmarking using indirect hand hygiene indicator.

    Science.gov (United States)

    Girard, Raphaële; Aupee, Martine; Erb, Martine; Bettinger, Anne; Jouve, Alice

    2012-12-01

    The 3 ml volume currently used as the hand hygiene (HH) measure has been explored as the pertinent dose for an indirect indicator of HH compliance. A multicenter study was conducted in order to ascertain the required dose when using different products. The average contact duration before drying was measured and compared with references. Effective hand coverage had to include the whole hand and the wrist. Two durations were chosen as points of reference: 30 s, as given by guidelines, and the duration validated by the European standard EN 1500. Each product was to be tested, using standardized procedures, by three nosocomial infection prevention teams, for three different doses (3, 2 and 1.5 ml). Data from 27 products and 1706 tests were analyzed. Depending on the product, the dose needed to ensure a 30-s contact duration in 75% of tests ranged from 2 ml to more than 3 ml, and the dose needed to ensure a contact duration exceeding the EN 1500 times in 75% of tests ranged from 1.5 ml to more than 3 ml. The implication is the following: if different products are used, the volume applied does not give an unbiased estimate of HH compliance. Other compliance evaluation methods remain necessary for efficient benchmarking. Copyright © 2012 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.

  12. Benchmark measurements and simulations of dose perturbations due to metallic spheres in proton beams

    International Nuclear Information System (INIS)

    Newhauser, Wayne D.; Rechner, Laura; Mirkovic, Dragan; Yepes, Pablo; Koch, Nicholas C.; Titt, Uwe; Fontenot, Jonas D.; Zhang, Rui

    2013-01-01

    Monte Carlo simulations are increasingly used for dose calculations in proton therapy due to their inherent accuracy. However, dosimetric deviations have been found using Monte Carlo codes when high-density materials are present in the proton beamline. The purpose of this work was to quantify the magnitude of dose perturbation caused by metal objects. We did this by comparing measurements and Monte Carlo predictions of dose perturbations caused by the presence of small metal spheres in several clinical proton therapy beams as functions of proton beam range and drift space. The Monte Carlo codes MCNPX, GEANT4 and Fast Dose Calculator (FDC) were used. Generally good agreement was found between measurements and Monte Carlo predictions, with the average difference within 5% and the maximum difference within 17%. The modification of the multiple Coulomb scattering model in the MCNPX code yielded improved accuracy and provided the best overall agreement with measurements. Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy beams when short drift spaces are involved. - Highlights: • We compared measurements and Monte Carlo predictions of dose perturbations caused by metal objects in proton beams. • Different Monte Carlo codes were used, including MCNPX, GEANT4 and Fast Dose Calculator. • Good agreement was found between measurements and Monte Carlo simulations. • The modification of the multiple Coulomb scattering model in the MCNPX code yielded improved accuracy. • Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy

  13. Estimate of safe human exposure levels for lunar dust based on comparative benchmark dose modeling.

    Science.gov (United States)

    James, John T; Lam, Chiu-Wing; Santana, Patricia A; Scully, Robert R

    2013-04-01

    Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. The United States and other spacefaring nations intend to return to the moon for extensive exploration within a few decades. In the meantime, habitats for that exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. Herein we estimate safe exposure limits for lunar dust collected during the Apollo 14 mission. We instilled three respirable-sized (∼2 μm mass median diameter) lunar dusts (two ground and one unground) and two standard dusts of widely different toxicities (quartz and TiO₂) into the respiratory system of rats. Rats in groups of six were given 0, 1, 2.5 or 7.5 mg of the test dust in a saline-Survanta® vehicle, and biochemical and cellular biomarkers of toxicity in lung lavage fluid were assayed 1 week and 1 month after instillation. By comparing the dose-response curves of sensitive biomarkers, we estimated safe exposure levels for astronauts and concluded that unground lunar dust and dust ground by two different methods were not toxicologically distinguishable. The safe exposure estimates were 1.3 ± 0.4 mg/m³ (jet-milled dust), 1.0 ± 0.5 mg/m³ (ball-milled dust) and 0.9 ± 0.3 mg/m³ (unground, natural dust). We estimate that 0.5-1 mg/m³ of lunar dust is safe for periodic human exposures during long stays in habitats on the lunar surface.
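
    As a rough illustration of the benchmark dose calculation described above, the sketch below fits a simple exponential dose-response model to hypothetical biomarker data and derives a BMD at a 10% benchmark response, with a crude delta-method lower bound. The model choice, numbers and confidence procedure are illustrative assumptions, not those of the study.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical dose-response data: instilled dose (mg) vs. a lung-lavage
        # biomarker (arbitrary units); values are illustrative only.
        dose = np.array([0.0, 1.0, 2.5, 7.5])
        response = np.array([1.0, 1.35, 2.1, 4.8])

        def expo(d, a, b):
            # Simple continuous dose-response model: f(d) = a * exp(b * d)
            return a * np.exp(b * d)

        (a, b), cov = curve_fit(expo, dose, response, p0=(1.0, 0.1))

        # Benchmark response: 10% change from control => a*exp(b*BMD) = 1.1*a
        bmr = 0.10
        bmd = np.log(1.0 + bmr) / b

        # Crude one-sided 95% lower bound on the BMD via the delta method on b
        sd_b = np.sqrt(cov[1, 1])
        sd_bmd = np.log(1.0 + bmr) / b**2 * sd_b   # |dBMD/db| * sd(b)
        bmdl = bmd - 1.645 * sd_bmd
        print(f"BMD = {bmd:.2f} mg, BMDL ~ {bmdl:.2f} mg")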

  14. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  15. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-01-01

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  16. Methods of assessing total doses integrated across pathways

    International Nuclear Information System (INIS)

    Grzechnik, M.; Camplin, W.; Clyne, F.; Allott, R.; Webbe-Wood, D.

    2006-01-01

    future years. C) Construct: Individuals with high rates of consumption or occupancy across all pathways are used to derive rates for each pathway. These are applied in future years. D) Top-Two: High and average consumption and occupancy rates for each pathway are derived. Doses can be calculated for all combinations where two pathways are considered at high rates and the remainder as average. E) Profiling: A profile is derived by calculating consumption and occupancy rates for each pathway for individuals who exhibit high rates for a single pathway. Other profiles may be built by repeating for other pathways. Total dose is the highest dose for any profile, and that profile becomes known as the critical group. Method A was used as a benchmark, with methods B-E compared according to the previously specified criteria. Overall the profiling method of total dose calculation was adopted due to its favourable overall comparison with the individual method and the homogeneity of the critical group selected. (authors)
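
    The profiling method (E) described above can be sketched in a few lines of numpy, under assumed data: a small table of per-individual pathway rates, per-pathway dose coefficients, and the top quartile taken as the high-rate group for each pathway. All names and numbers are hypothetical.

        import numpy as np

        # Hypothetical survey data: consumption/occupancy rates for 6 individuals
        # across 3 pathways, plus a dose coefficient per unit rate for each
        # pathway. All numbers are illustrative.
        rates = np.array([
            [50.,  5., 100.],
            [10., 40.,  50.],
            [20., 10., 800.],
            [60.,  8., 200.],
            [15., 35.,  60.],
            [25., 12., 700.],
        ])
        dose_per_unit = np.array([0.002, 0.005, 0.0001])  # mSv per unit rate

        profiles = []
        for p in range(rates.shape[1]):
            # High-rate group for pathway p: individuals at or above the 75th percentile
            high = rates[:, p] >= np.percentile(rates[:, p], 75)
            # The profile is that group's mean rate across *all* pathways
            profiles.append(rates[high].mean(axis=0))

        total_doses = [profile @ dose_per_unit for profile in profiles]
        critical = int(np.argmax(total_doses))
        print(f"Critical-group profile (pathway {critical}): {max(total_doses):.3f} mSv")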

  17. 77 FR 36533 - Notice of Availability of the Benchmark Dose Technical Guidance

    Science.gov (United States)

    2012-06-19

    ... environment, the EPA routinely conducts risk assessments on chemical agents that may be toxic to humans. A key component of the risk assessment process involves evaluating the dose-response relationship between exposure... BMD methodology for human health risk assessments. The document discusses computation of BMD values...

  18. Dose mapping simulation using the MCNP code for the Syrian gamma irradiation facility and benchmarking

    International Nuclear Information System (INIS)

    Khattab, K.; Boush, M.; Alkassiri, H.

    2013-01-01

    Highlights: • The MCNP-4C code was used to calculate the gamma ray dose rate spatial distribution for the SGIF. • Measurement of the gamma ray dose rate spatial distribution using the chlorobenzene dosimeter was conducted as well. • Good agreements were noticed between the calculated and measured results. • The maximum relative differences were less than 7%, 4% and 4% in the x, y and z directions, respectively. - Abstract: A three-dimensional model of the Syrian gamma irradiation facility (SGIF) is developed in this paper to calculate the gamma ray dose rate spatial distribution in the irradiation room at the 60Co source board using the MCNP-4C code. Measurement of the gamma ray dose rate spatial distribution using the chlorobenzene dosimeter is conducted as well to compare the calculated and measured results. Good agreements are noticed between the calculated and measured results, with maximum relative differences less than 7%, 4% and 4% in the x, y and z directions, respectively. This agreement indicates that the established model is an accurate representation of the SGIF and can be used in the future to make the calculation design for a new irradiation facility

  19. Consortial Benchmarking: a method of academic-practitioner collaborative research and its application in a B2B environment

    NARCIS (Netherlands)

    Schiele, Holger; Krummaker, Stefan

    2010-01-01

    Purpose of the paper and literature addressed: Development of a new method for academic-practitioner collaboration, addressing the literature on collaborative research. Research method: Model elaboration and test with an in-depth case study. Research findings: In consortial benchmarking, practitioners

  20. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  1. Reliable B cell epitope predictions: impacts of method development and improved benchmarking

    DEFF Research Database (Denmark)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole

    2012-01-01

    biomedical applications such as: rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping...... evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that, given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances, suggesting these tools to be a powerful asset in rational epitope discovery. The updated version...

  2. Mechanism-based risk assessment strategy for drug-induced cholestasis using the transcriptional benchmark dose derived by toxicogenomics.

    Science.gov (United States)

    Kawamoto, Taisuke; Ito, Yuichi; Morita, Osamu; Honda, Hiroshi

    2017-01-01

    Cholestasis is one of the major causes of drug-induced liver injury (DILI), which can result in withdrawal of approved drugs from the market. Early identification of cholestatic drugs is difficult due to the complex mechanisms involved. In order to develop a strategy for mechanism-based risk assessment of cholestatic drugs, we analyzed gene expression data obtained from the livers of rats that had been orally administered with 12 known cholestatic compounds repeatedly for 28 days at three dose levels. Qualitative analyses were performed using two statistical approaches (hierarchical clustering and principal component analysis), in addition to pathway analysis. The transcriptional benchmark dose (tBMD) and tBMD 95% lower limit (tBMDL) were used for quantitative analyses, which revealed three compound sub-groups that produced different types of differential gene expression; these groups of genes were mainly involved in inflammation, cholesterol biosynthesis, and oxidative stress. Furthermore, the tBMDL values for each test compound were in good agreement with the relevant no observed adverse effect level. These results indicate that our novel strategy for drug safety evaluation using mechanism-based classification and tBMDL would facilitate the application of toxicogenomics for risk assessment of cholestatic DILI.

  3. Benchmarking residual dose rates in a NuMI-like environment

    Energy Technology Data Exchange (ETDEWEB)

    Igor L. Rakhno et al.

    2001-11-02

    Activation of various structural and shielding materials is an important issue for many applications. A model developed recently to calculate residual activity of arbitrary composite materials for arbitrary irradiation and cooling times is presented in the paper. Measurements have been performed at the Fermi National Accelerator Laboratory using a 120 GeV proton beam to study induced radioactivation of materials used for beam line components and shielding. The calculated residual dose rates for the samples studied behind the target and outside of the thick shielding are presented and compared with the measured ones. Effects of energy spectra, sample material and dimensions, their distance from the shielding, and gaps between the shielding modules and walls as well as between the modules themselves were studied in detail.

  4. Determining the sensitivity of Data Envelopment Analysis method used in airport benchmarking

    Directory of Open Access Journals (Sweden)

    Mircea BOSCOIANU

    2013-03-01

    In the last decade there were some important changes in the airport industry, caused by the liberalization of the air transportation market. Until recently airports were considered infrastructure elements, and they were evaluated only by traffic values or their maximum capacity. A gradual orientation towards commercial operation led to the need for other, more efficiency-oriented, ways of evaluation. The existing methods for assessing efficiency used for other production units were not suitable for airports due to the specific features and high complexity of airport operations. In recent years several papers have proposed Data Envelopment Analysis as a method for assessing operational efficiency in order to conduct benchmarking. This method offers the possibility of dealing with a large number of variables of different types, which represents its main advantage and also recommends it as a good benchmarking tool for airport management. The goal of this paper is to determine the sensitivity of this method in relation to its inputs and outputs. A Data Envelopment Analysis is conducted for 128 airports worldwide, in both input- and output-oriented measures, and the results are analysed against variations of some inputs and outputs. Possible weaknesses of using DEA for assessing airport performance are revealed and analysed against the method's advantages.
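
    For concreteness, the sketch below implements one common DEA formulation, the input-oriented CCR envelopment model, as a linear program with scipy; the toy airport inputs and outputs are invented and are not the paper's dataset.

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_input(X, Y, o):
            """Input-oriented CCR efficiency of DMU o.
            X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
            m, n = X.shape
            s = Y.shape[0]
            # Decision variables z = [theta, lambda_1 .. lambda_n]; minimize theta.
            c = np.r_[1.0, np.zeros(n)]
            # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
            A_in = np.hstack([-X[:, [o]], X])
            # Outputs: -sum_j lambda_j * y_rj <= -y_ro
            A_out = np.hstack([np.zeros((s, 1)), -Y])
            res = linprog(c,
                          A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[:, o]],
                          bounds=[(0, None)] * (n + 1))
            return res.fun  # efficiency score theta* in (0, 1]

        # Toy data: inputs = (runways, staff), outputs = (passengers, movements)
        X = np.array([[2., 3., 1., 4.], [500., 900., 300., 1200.]])
        Y = np.array([[5e6, 8e6, 2e6, 1.1e7], [6e4, 9e4, 2.5e4, 1.2e5]])
        for o in range(X.shape[1]):
            print(f"DMU {o}: efficiency = {dea_ccr_input(X, Y, o):.3f}")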

  5. Investigation on method of elasto-plastic analysis for piping system (benchmark analysis)

    International Nuclear Information System (INIS)

    Kabaya, Takuro; Kojima, Nobuyuki; Arai, Masashi

    2015-01-01

    This paper provides a method of elasto-plastic analysis for the practical seismic design of nuclear piping systems. JSME started a task to establish a method of elasto-plastic analysis for nuclear piping systems, within which benchmark analyses have been performed to investigate methods of elasto-plastic analysis. Our company has participated in the benchmark analyses, and we settled on a method which accurately simulates the results of piping excitation tests. The recommended method of elasto-plastic analysis is therefore as follows: 1) The elasto-plastic analysis is composed of a dynamic analysis of the piping system modeled using beam elements and a static analysis of the deformed elbow modeled using shell elements. 2) A bi-linear model is applied for the elasto-plastic property. The yield point is the standardized yield point multiplied by 1.2, and the second gradient is 1/100 of Young's modulus. Kinematic hardening is used as the hardening rule. 3) The fatigue life is evaluated from the strain ranges obtained by the elasto-plastic analysis, using the rain-flow method and the fatigue curve of previous studies. (author)
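
    The bi-linear kinematic-hardening rule in item 2) can be illustrated with a one-dimensional return-mapping update. The sketch below is a generic textbook algorithm under assumed material constants, not the JSME benchmark implementation; the tangent modulus is set to E/100 per the recommendation above.

        import numpy as np

        E = 200e3              # Young's modulus (MPa); illustrative
        Et = E / 100.0         # second (post-yield) gradient per the recommendation
        H = E * Et / (E - Et)  # kinematic hardening modulus consistent with Et
        sig_y = 1.2 * 250.0    # standardized yield point x 1.2 (MPa); illustrative

        def response(strain_history):
            """1D bilinear stress-strain with linear kinematic hardening
            (radial return mapping)."""
            sig, alpha, eps_p = 0.0, 0.0, 0.0  # stress, back stress, plastic strain
            eps_prev, out = 0.0, []
            for eps in strain_history:
                sig_trial = sig + E * (eps - eps_prev)
                f = abs(sig_trial - alpha) - sig_y
                if f > 0.0:                    # plastic step: return mapping
                    d_gamma = f / (E + H)
                    n = np.sign(sig_trial - alpha)
                    sig = sig_trial - E * d_gamma * n
                    alpha += H * d_gamma * n
                    eps_p += d_gamma * n
                else:
                    sig = sig_trial
                eps_prev = eps
                out.append(sig)
            return np.array(out)

        # Symmetric strain cycle to exercise the Bauschinger effect
        cycle = np.concatenate([np.linspace(0, 0.01, 50),
                                np.linspace(0.01, -0.01, 100)])
        stress = response(cycle)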

  6. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4

    International Nuclear Information System (INIS)

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-01-01

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst ±5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately ±2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV

  7. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    Science.gov (United States)

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a 30 cm diameter and 20 cm length cylinder. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst +/- 5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately +/- 2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.

  8. Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL

    International Nuclear Information System (INIS)

    Williams, M.L.; Wright, R.Q.; Barhen, J.; Rothenstein, W.; Toney, B.

    1982-01-01

    The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) have been extensively studied to determine their major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested for several numerical benchmark problems. Integral parameters computed by E-C are compared with those obtained with point-cross section Monte Carlo calculations, and E-C fine group cross sections are benchmarked against point-cross section discrete ordinates calculations. It is found that the code modifications improve agreement between E-C and the more sophisticated methods. E-C shows excellent agreement on the integral parameters and usually agrees within a few percent on fine-group, shielded cross sections

  9. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  10. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  11. Deflection-based method for seismic response analysis of concrete walls: Benchmarking of CAMUS experiment

    International Nuclear Information System (INIS)

    Basu, Prabir C.; Roshan, A.D.

    2007-01-01

    A number of shake table tests were conducted on a scaled-down model of a concrete wall as part of the CAMUS experiment. The experiments were conducted between 1996 and 1998 in the CEA facilities in Saclay, France. Benchmarking of the CAMUS experiments was undertaken as part of the coordinated research program on 'Safety Significance of Near-Field Earthquakes' organised by the International Atomic Energy Agency (IAEA). The deflection-based method was adopted for the benchmarking exercise. The non-linear static procedure of the deflection-based method has two basic steps: pushover analysis, and determination of the target displacement or performance point. Pushover analysis is an analytical procedure to assess the capacity of a structural system to withstand seismic loading, considering its redundancies and inelastic deformation. The outcome of a pushover analysis is the force-displacement curve (base shear versus top/roof displacement) of the structure. This is obtained by a step-by-step non-linear static analysis of the structure with increasing load. The second step is to determine the target displacement, also known as the performance point: the likely maximum displacement of the structure due to a specified seismic input motion. Established procedures, FEMA-273 and ATC-40, are available to determine this maximum deflection. The responses of the CAMUS test specimen are determined by the deflection-based method, and the analytically calculated values compare well with the test results.

  12. Correlation of In Vivo Versus In Vitro Benchmark Doses (BMDs) Derived From Micronucleus Test Data: A Proof of Concept Study.

    Science.gov (United States)

    Soeteman-Hernández, Lya G; Fellows, Mick D; Johnson, George E; Slob, Wout

    2015-12-01

    In this study, we explored the applicability of using in vitro micronucleus (MN) data from human lymphoblastoid TK6 cells to derive in vivo genotoxicity potency information. Nineteen chemicals covering a broad spectrum of genotoxic modes of action were tested in an in vitro MN test using TK6 cells using the same study protocol. Several of these chemicals were considered to need metabolic activation, and these were administered in the presence of S9. The Benchmark dose (BMD) approach was applied using the dose-response modeling program PROAST to estimate the genotoxic potency from the in vitro data. The resulting in vitro BMDs were compared with previously derived BMDs from in vivo MN and carcinogenicity studies. A proportional correlation was observed between the BMDs from the in vitro MN and the BMDs from the in vivo MN assays. Further, a clear correlation was found between the BMDs from in vitro MN and the associated BMDs for malignant tumors. Although these results are based on only 19 compounds, they show that genotoxicity potencies estimated from in vitro tests may result in useful information regarding in vivo genotoxic potency, as well as expected cancer potency. Extension of the number of compounds and further investigation of metabolic activation (S9) and of other toxicokinetic factors would be needed to validate our initial conclusions. However, this initial work suggests that this approach could be used for in vitro to in vivo extrapolations which would support the reduction of animals used in research (3Rs: replacement, reduction, and refinement). © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology.
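
    A proportional correlation of the kind reported is often checked with a simple log-log regression; the sketch below uses invented paired BMD values purely to show the computation, not the study's data.

        import numpy as np
        from scipy import stats

        # Hypothetical paired potencies; illustrative only.
        bmd_invitro = np.array([0.5, 1.2, 3.4, 8.0, 15.0, 40.0])
        bmd_invivo = np.array([2.1, 4.0, 12.5, 30.0, 48.0, 160.0])

        # A proportional relation BMD_vivo = k * BMD_vitro appears as a straight
        # line of slope ~1 in log-log space.
        fit = stats.linregress(np.log10(bmd_invitro), np.log10(bmd_invivo))
        print(f"slope = {fit.slope:.2f} (proportionality => ~1), "
              f"r = {fit.rvalue:.2f}, intercept = log10(k) = {fit.intercept:.2f}")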

  13. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience

    International Nuclear Information System (INIS)

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Joergen; Nyholm, Tufve; Ahnesjoe, Anders; Karlsson, Mikael

    2007-01-01

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm³ ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach

  14. Benchmark experiments of dose distributions in phantom placed behind iron and concrete shields at the TIARA facility

    International Nuclear Information System (INIS)

    Nakane, Yoshihiro; Sakamoto, Yukio; Tsuda, Shuichi

    2004-01-01

    To verify the calculation methods used for the evaluations of neutron dose in the radiation shielding design of the high-intensity proton accelerator facility (J-PARC), dose distributions in a 30×30×30 cm³ slab plastic phantom placed behind iron and concrete test shields were measured by using a tissue equivalent proportional counter for 65-MeV quasi-monoenergetic neutrons generated from the 7Li(p,n) reaction with 68-MeV protons at the TIARA facility. Dose distributions in the phantom were calculated by using the MCNPX and the NMTC/JAM-MCNP codes with the flux-to-dose conversion coefficients prepared for the shielding design of the facility. The comparison shows the calculated results were in good agreement with the measured ones within 20%. (author)

  15. In silico toxicology: comprehensive benchmarking of multi-label classification methods applied to chemical toxicity data

    KAUST Repository

    Raies, Arwa B.

    2017-12-05

    One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.

  16. In silico toxicology: comprehensive benchmarking of multi-label classification methods applied to chemical toxicity data

    KAUST Repository

    Raies, Arwa B.; Bajic, Vladimir B.

    2017-01-01

    One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.
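
    A minimal sketch of the kind of comparison described, using scikit-learn on synthetic multi-label data: binary relevance (one independent model per endpoint) versus a classifier chain that exploits endpoint correlations. The data, models and metrics are illustrative choices, not those of the review.

        import numpy as np
        from sklearn.datasets import make_multilabel_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.multioutput import MultiOutputClassifier, ClassifierChain
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import f1_score, hamming_loss

        # Synthetic stand-in for a chemical-descriptor matrix with several
        # toxicity endpoints per compound (real work would use assay data).
        X, Y = make_multilabel_classification(n_samples=500, n_features=40,
                                              n_classes=6, random_state=0)
        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

        models = {
            # Binary relevance: one independent model per endpoint
            "binary relevance": MultiOutputClassifier(RandomForestClassifier(random_state=0)),
            # Classifier chain: feeds earlier label predictions to later models,
            # exploiting correlations between endpoints
            "classifier chain": ClassifierChain(RandomForestClassifier(random_state=0)),
        }
        for name, model in models.items():
            Y_hat = model.fit(X_tr, Y_tr).predict(X_te)
            print(f"{name}: micro-F1 = {f1_score(Y_te, Y_hat, average='micro'):.3f}, "
                  f"Hamming loss = {hamming_loss(Y_te, Y_hat):.3f}")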

  17. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
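
    The biexponential parametrization mentioned above can be illustrated with a least-squares fit; the kernel samples below are synthetic stand-ins for Monte Carlo data, and all numbers are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(r, A, mu1, B, mu2):
            # Biexponential scatter-kernel form: k(r) = A e^(-mu1 r) + B e^(-mu2 r)
            return A * np.exp(-mu1 * r) + B * np.exp(-mu2 * r)

        # Stand-in for a Monte Carlo generated scatter dose point kernel
        # (radius in cm, kernel value in arbitrary units)
        r = np.linspace(0.5, 10.0, 20)
        k = 0.8 * np.exp(-1.5 * r) + 0.05 * np.exp(-0.25 * r)
        k_noisy = k * (1 + 0.02 * np.random.default_rng(0).standard_normal(r.size))

        params, _ = curve_fit(biexp, r, k_noisy, p0=(1.0, 1.0, 0.1, 0.1))
        print("A, mu1, B, mu2 =", np.round(params, 3))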

  18. Track benchmarking method for uncertainty quantification of particle tracking velocimetry interpolations

    International Nuclear Information System (INIS)

    Schneiders, Jan F G; Sciacchitano, Andrea

    2017-01-01

    The track benchmarking method (TBM) is proposed for uncertainty quantification of particle tracking velocimetry (PTV) data mapped onto a regular grid. The method provides statistical uncertainty for a velocity time-series and can in addition be used to obtain instantaneous uncertainty at increased computational cost. Interpolation techniques are typically used to map velocity data from scattered PTV (e.g. tomographic PTV and Shake-the-Box) measurements onto a Cartesian grid. Recent examples of these techniques are the FlowFit and VIC+  methods. The TBM approach estimates the random uncertainty in dense velocity fields by performing the velocity interpolation using a subset of typically 95% of the particle tracks and by considering the remaining tracks as an independent benchmarking reference. In addition, also a bias introduced by the interpolation technique is identified. The numerical assessment shows that the approach is accurate when particle trajectories are measured over an extended number of snapshots, typically on the order of 10. When only short particle tracks are available, the TBM estimate overestimates the measurement error. A correction to TBM is proposed and assessed to compensate for this overestimation. The experimental assessment considers the case of a jet flow, processed both by tomographic PIV and by VIC+. The uncertainty obtained by TBM provides a quantitative evaluation of the measurement accuracy and precision and highlights the regions of high error by means of bias and random uncertainty maps. In this way, it is possible to quantify the uncertainty reduction achieved by advanced interpolation algorithms with respect to standard correlation-based tomographic PIV. The use of TBM for uncertainty quantification and comparison of different processing techniques is demonstrated. (paper)
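
    A toy version of the TBM idea, under stated assumptions: synthetic scattered "track" samples of a known 2D field, interpolation from 95% of the points with scipy's griddata (standing in for advanced interpolators such as FlowFit or VIC+), and bias/random error evaluated on the held-out 5%.

        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(1)

        # Synthetic "particle track" samples of a 2D velocity field u(x, y)
        pts = rng.uniform(0, 1, size=(2000, 2))
        u = np.sin(2 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1])
        u += 0.05 * rng.standard_normal(u.size)          # measurement noise

        # TBM idea: interpolate from ~95% of the tracks, benchmark on the rest
        mask = rng.uniform(size=len(u)) < 0.95
        u_hat = griddata(pts[mask], u[mask], pts[~mask], method='linear')

        err = u_hat - u[~mask]
        err = err[~np.isnan(err)]                        # drop points outside hull
        print(f"bias = {err.mean():+.4f}, random (std) = {err.std():.4f}")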

  19. Generalization of Asaoka method to linearly anisotropic scattering: benchmark data in cylindrical geometry

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1975-11-01

    The Integral Transform Method for the neutron transport equation has been developed in recent years by Asaoka and others. The method uses Fourier transform techniques to solve isotropic one-dimensional transport problems in homogeneous media. The method has been extended to linearly anisotropic transport in one-dimensional homogeneous media. Series expansions were also obtained, using Hembd techniques, for the new anisotropic matrix elements in cylindrical geometry. Carlvik's spatial-spherical harmonics method was generalized to solve the same problem. By applying a relation between the isotropic and anisotropic one-dimensional kernels, it was demonstrated that anisotropic matrix elements can be calculated as a linear combination of a few isotropic matrix elements; in practice, this means that the anisotropic problem of order N can be solved with the N+2 isotropic matrix for plane and spherical geometries, and with the N+1 isotropic matrix for cylindrical geometry. A method of solving linearly anisotropic one-dimensional transport problems in homogeneous media was defined by applying the observations of Mika and Stankiewicz: isotropic matrix elements are computed by Hembd series and anisotropic matrix elements are then calculated from recursive relations. The method has been applied to albedo and critical problems in cylindrical geometries. Finally, a number of results were computed with 12-digit accuracy for use as benchmarks [fr

  20. Benchmarking of London Dispersion-Accounting Density Functional Theory Methods on Very Large Molecular Complexes.

    Science.gov (United States)

    Risthaus, Tobias; Grimme, Stefan

    2013-03-12

    A new test set (S12L) containing 12 supramolecular noncovalently bound complexes is presented and used to evaluate seven different methods to account for dispersion in DFT (DFT-D3, DFT-D2, DFT-NL, XDM, dDsC, TS-vdW, M06-L) at different basis set levels against experimental, back-corrected reference energies. This allows conclusions about the performance of each method in an explorative research setting on "real-life" problems. Most DFT methods show satisfactory performance but, due to the size of the complexes, almost always require an explicit correction for the nonadditive Axilrod-Teller-Muto three-body dispersion interaction to obtain accurate results. The necessity of using a method capable of accounting for dispersion is clearly demonstrated by the fact that the two-body dispersion contributions are on the order of 20-150% of the total interaction energy. MP2 and some variants thereof are shown to be insufficient for this, while a few tested D3-corrected semiempirical MO methods perform reasonably well. Overall, we suggest the use of this benchmark set as a "sanity check" against overfitting to too small molecular cases.

  1. Use of benchmark dose-volume histograms for selection of the optimal technique between three-dimensional conformal radiation therapy and intensity-modulated radiation therapy in prostate cancer

    International Nuclear Information System (INIS)

    Luo Chunhui; Yang, Claus Chunli; Narayan, Samir; Stern, Robin L.; Perks, Julian; Goldberg, Zelanna; Ryu, Janice; Purdy, James A.; Vijayakumar, Srinivasan

    2006-01-01

    Purpose: The aim of this study was to develop and validate our own benchmark dose-volume histograms (DVHs) of the bladder and rectum for both conventional three-dimensional conformal radiation therapy (3D-CRT) and intensity-modulated radiation therapy (IMRT), and to evaluate quantitatively the benefits of using IMRT vs. 3D-CRT in treating localized prostate cancer. Methods and Materials: During the implementation of IMRT for prostate cancer, our policy was to plan each patient with both 3D-CRT and IMRT. This study included 31 patients with T1b to T2c localized prostate cancer, for whom we completed double-planning using both 3D-CRT and IMRT techniques. The target volumes included the prostate, with or without the proximal seminal vesicles. Bladder and rectum DVH data were summarized to obtain an average DVH for each technique and then compared using two-tailed paired t test analysis. Results: For 3D-CRT our bladder doses were as follows: mean 28.8 Gy, V60 = 16.4%, V70 = 10.9%; rectal doses were: mean 39.3 Gy, V60 = 21.8%, V70 = 13.6%. IMRT plans resulted in similar mean dose values (bladder 26.4 Gy, rectum 34.9 Gy), but lower values of V70 for the bladder (7.8%) and rectum (9.3%). These benchmark DVHs have resulted in a critical evaluation of our 3D-CRT techniques over time. Conclusion: Our institution has developed benchmark DVHs for the bladder and rectum based on our clinical experience with 3D-CRT and IMRT. We use these standards as well as differences in individual cases to make decisions on whether patients may benefit from IMRT treatment rather than 3D-CRT
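
    Mean dose and Vx values like those quoted are straightforward to compute from per-voxel doses; a minimal sketch with invented bladder doses follows (uniform voxel volume assumed).

        import numpy as np

        def dvh_metrics(dose_gy, thresholds=(60.0, 70.0)):
            """Mean dose and Vx (% of organ volume receiving >= x Gy) from a
            per-voxel dose array; assumes uniform voxel volume."""
            vx = {f"V{int(t)}": 100.0 * np.mean(dose_gy >= t) for t in thresholds}
            return dose_gy.mean(), vx

        # Illustrative bladder voxel doses (Gy) for a hypothetical plan
        rng = np.random.default_rng(0)
        bladder = np.clip(rng.gamma(shape=2.0, scale=15.0, size=50000), 0, 78)
        mean_dose, vx = dvh_metrics(bladder)
        print(f"mean = {mean_dose:.1f} Gy, "
              + ", ".join(f"{k} = {v:.1f}%" for k, v in vx.items()))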

  2. Bacterial whole genome-based phylogeny: construction of a new benchmarking dataset and assessment of some existing methods

    DEFF Research Database (Denmark)

    Ahrenfeldt, Johanne; Skaarup, Carina; Hasman, Henrik

    2017-01-01

    from sequencing reads. In the present study we describe a new dataset that we have created for the purpose of benchmarking such WGS-based methods for epidemiological data, and also present an analysis where we use the data to compare the performance of some current methods. Results Our aim...

  3. Evaluating the Resilience of the Bottom-up Method used to Detect and Benchmark the Smartness of University Campuses

    NARCIS (Netherlands)

    Giovannella, Carlo; Andone, Diana; Dascalu, Mihai; Popescu, Elvira; Rehm, Matthias; Mealha, Oscar

    2017-01-01

    A new method to perform a bottom-up extraction and benchmark of the perceived multilevel smartness of complex ecosystems has been recently described and applied to territories and learning ecosystems like university campuses and schools. In this paper we study the resilience of our method

  4. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    Science.gov (United States)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are much less than the magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  5. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    International Nuclear Information System (INIS)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-01-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  6. Benchmarking of a novel contactless characterisation method for micro thermoelectric modules (μTEMs)

    International Nuclear Information System (INIS)

    Hickey, S; Punch, J; Jeffers, N

    2014-01-01

    Significant challenges exist in the thermal control of Photonics Integrated Circuits (PICs) for use in optical communications. Increasing component density coupled with greater functionality is leading to higher device-level heat fluxes, stretching the capabilities of conventional cooling methods using thermoelectric modules (TEMs). A tailored thermal control solution incorporating micro thermoelectric modules (μTEMs) to individually address hotspots within PICs could provide an energy efficient alternative to existing control methods. Performance characterisation is required to establish the suitability of commercially-available μTEMs for the operating conditions in current and next generation PICs. The objective of this paper is to outline a novel method for the characterisation of thermoelectric modules (TEMs), which utilises infra-red (IR) heat transfer and temperature measurement to obviate the need for mechanical stress on the upper surface of low compression tolerance (∼0.5N) μTEMs. The method is benchmarked using a commercially-available macro scale TEM, comparing experimental data to the manufacturer's performance data sheet.

  7. Benchmarking of a novel contactless characterisation method for micro thermoelectric modules (μTEMs)

    Science.gov (United States)

    Hickey, S.; Punch, J.; Jeffers, N.

    2014-07-01

    Significant challenges exist in the thermal control of Photonics Integrated Circuits (PICs) for use in optical communications. Increasing component density coupled with greater functionality is leading to higher device-level heat fluxes, stretching the capabilities of conventional cooling methods using thermoelectric modules (TEMs). A tailored thermal control solution incorporating micro thermoelectric modules (μTEMs) to individually address hotspots within PICs could provide an energy efficient alternative to existing control methods. Performance characterisation is required to establish the suitability of commercially-available μTEMs for the operating conditions in current and next generation PICs. The objective of this paper is to outline a novel method for the characterisation of thermoelectric modules (TEMs), which utilises infra-red (IR) heat transfer and temperature measurement to obviate the need for mechanical stress on the upper surface of low compression tolerance (~0.5N) μTEMs. The method is benchmarked using a commercially-available macro scale TEM, comparing experimental data to the manufacturer's performance data sheet.

  8. The MIRD method of estimating absorbed dose

    International Nuclear Information System (INIS)

    Weber, D.A.

    1991-01-01

    The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution and clearance data of radiopharmaceuticals with the physical properties of radionuclides to obtain dose estimates. This tutorial presents a review of the MIRD schema and the derivation of the equations used to calculate absorbed dose, and shows how the schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine
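
    In its standard form, the MIRD schema combines the cumulated activity in each source region with a precomputed S value. Schematically, with r_T the target region, r_S a source region, Δ_i the mean energy emitted per decay for radiation type i, φ_i the absorbed fraction, and m the target mass:

        \bar{D}(r_T) = \sum_{r_S} \tilde{A}(r_S)\, S(r_T \leftarrow r_S),
        \qquad
        S(r_T \leftarrow r_S) = \frac{1}{m(r_T)} \sum_i \Delta_i\, \phi_i(r_T \leftarrow r_S)

    Summing over all source regions with nonnegligible uptake gives the mean dose to the target region.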

  9. An unbiased method to build benchmarking sets for ligand-based virtual screening and its application to GPCRs.

    Science.gov (United States)

    Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon

    2014-05-27

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could bring the biases to the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCRs targets. To be more specific, our methods can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the "artificial enrichment" and "analogue bias" of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.
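
    The two key filters of such a design, physicochemical matching and topological dissimilarity to all ligands, might be sketched with RDKit as below; the thresholds and property set are illustrative assumptions, not the published protocol.

        from rdkit import Chem, DataStructs
        from rdkit.Chem import AllChem, Descriptors

        def passes_decoy_filters(candidate_smiles, ligand_smiles, max_tanimoto=0.3,
                                 max_mw_diff=25.0, max_logp_diff=1.0):
            """Accept a decoy if it matches ligand physicochemistry but is
            topologically dissimilar to *all* ligands (to avoid false negatives).
            Thresholds are illustrative, not those of the cited paper."""
            cand = Chem.MolFromSmiles(candidate_smiles)
            fp_c = AllChem.GetMorganFingerprintAsBitVect(cand, 2, nBits=2048)
            prop_ok, topo_ok = False, True
            for smi in ligand_smiles:
                lig = Chem.MolFromSmiles(smi)
                if (abs(Descriptors.MolWt(cand) - Descriptors.MolWt(lig)) <= max_mw_diff
                        and abs(Descriptors.MolLogP(cand) - Descriptors.MolLogP(lig)) <= max_logp_diff):
                    prop_ok = True   # physicochemically similar to some ligand
                fp_l = AllChem.GetMorganFingerprintAsBitVect(lig, 2, nBits=2048)
                if DataStructs.TanimotoSimilarity(fp_c, fp_l) > max_tanimoto:
                    topo_ok = False  # too close in chemical topology to a ligand
            return prop_ok and topo_ok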

  10. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Directory of Open Access Journals (Sweden)

    Domonkos Tikk

    The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed: convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study

  11. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

    In this study, eight airborne laser scanning (ALS)-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space, covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure shows an overall accuracy of 97%. Method-based analysis, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.
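
    A restricted nearest-neighbour matching of the kind used for evaluation can be sketched with a k-d tree; the greedy one-to-one rule and the 2 m search radius below are illustrative assumptions, not the paper's exact procedure.

        import numpy as np
        from scipy.spatial import cKDTree

        def match_trees(detected_xy, inventory_xy, max_dist=2.0):
            """Greedy one-to-one nearest-neighbour matching of detected tree
            positions (arrays of x, y) to in situ inventory positions within a
            search radius (metres). Returns pairs plus commission/omission counts."""
            tree = cKDTree(inventory_xy)
            dist, idx = tree.query(detected_xy, distance_upper_bound=max_dist)
            matched, used = [], set()
            for d_i in np.argsort(dist):           # closest candidates first
                j = idx[d_i]
                if np.isfinite(dist[d_i]) and j not in used:
                    matched.append((d_i, j))
                    used.add(j)
            commission = len(detected_xy) - len(matched)  # detections w/o reference
            omission = len(inventory_xy) - len(matched)   # reference trees missed
            return matched, commission, omission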

  12. A two-dimensional method of manufactured solutions benchmark suite based on variations of Larsen's benchmark with escalating order of smoothness of the exact solution

    International Nuclear Information System (INIS)

    Schunert, Sebastian; Azmy, Yousry Y.

    2011-01-01

    The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally, fine-mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries, which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine-mesh reference solutions, since it secures knowledge of the underlying true solution, and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme. (author)

  13. Survey of methods used to assess human reliability in the human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1988-01-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim to assess the state-of-the-art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participate in the HF-RBE, which is organised around two study cases: (1) analysis of routine functional test and maintenance procedures, with the aim to assess the probability of test-induced failures, the probability of failures to remain unrevealed, and the potential to initiate transients because of errors performed in the test; and (2) analysis of human actions during an operational transient, with the aim to assess the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. The paper briefly reports how the HF-RBE was structured and gives an overview of the methods that have been used for predicting human reliability in both study cases. The experience in applying these methods is discussed and the results obtained are compared. (author)

  14. Anomaly detection in OECD Benchmark data using covariance methods

    International Nuclear Information System (INIS)

    Srinivasan, G.S.; Krinizs, K.; Por, G.

    1993-02-01

    OECD Benchmark data distributed for the SMORN VI Specialists Meeting in Reactor Noise were investigated for anomaly detection in artificially generated reactor noise benchmark analysis. It was observed that statistical features extracted from the covariance matrix of frequency components are very sensitive in terms of the anomaly detection level, and that it is possible to create well-defined alarm levels. (R.P.) 5 refs.; 23 figs.; 1 tab
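
    A minimal sketch of one covariance-based anomaly detector of this general kind (not necessarily the estimator used in the record above): spectral feature vectors from anomaly-free data define a baseline mean and covariance, and the Mahalanobis distance of new time windows against that baseline sets a well-defined alarm level; the quantile threshold is an assumption.

```python
import numpy as np

def alarm_flags(baseline, test, quantile=0.999):
    """baseline, test: (n_windows, n_features) spectral features per time window."""
    mu = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    inv = np.linalg.pinv(cov)

    def d2(feats):
        r = feats - mu
        return np.einsum("ij,jk,ik->i", r, inv, r)   # squared Mahalanobis distance

    threshold = np.quantile(d2(baseline), quantile)  # well-defined alarm level
    return d2(test) > threshold, threshold
```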

  15. How to benchmark methods for structure-based virtual screening of large compound libraries.

    Science.gov (United States)

    Christofferson, Andrew J; Huang, Niu

    2012-01-01

    Structure-based virtual screening is a useful computational technique for ligand discovery. To systematically evaluate different docking approaches, it is important to have a consistent benchmarking protocol that is both relevant and unbiased. Here, we describe the design of a benchmarking data set for docking screen assessment, a standard docking screening process, and the analysis and presentation of the enrichment of annotated ligands among a background decoy database.
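
    Enrichment of annotated ligands over decoys is commonly summarized by an enrichment factor; a minimal sketch follows (the 1% cutoff is an illustrative assumption):

```python
import numpy as np

def enrichment_factor(scores, is_ligand, top_fraction=0.01):
    """scores: docking scores (lower is better); is_ligand: True for annotated ligands."""
    order = np.argsort(scores)                      # rank the whole screened library
    n_top = max(1, int(top_fraction * len(scores)))
    hit_rate_top = is_ligand[order][:n_top].mean()  # ligand fraction in the top slice
    hit_rate_all = is_ligand.mean()                 # ligand fraction overall
    return hit_rate_top / hit_rate_all
```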

  16. Combining and benchmarking methods of foetal ECG extraction without maternal or scalp electrode data

    International Nuclear Information System (INIS)

    Behar, Joachim; Oster, Julien; Clifford, Gari D

    2014-01-01

    Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques. The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected. The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1–5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described. (paper)
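
    The preprocessing chain described above (band-pass filtering, an optional mains notch, then normalization) can be sketched with scipy.signal; the filter order, band edges, and Q factor below are assumptions, not the Challenge entry's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess_abdominal_ecg(sig, fs=1000.0, band=(3.0, 100.0), mains=50.0):
    # Band-pass to suppress baseline wander and high-frequency noise
    b, a = butter(4, band, btype="bandpass", fs=fs)
    out = filtfilt(b, a, sig)
    # Notch out mains interference (50 or 60 Hz) if required
    bn, an = iirnotch(mains, Q=30.0, fs=fs)
    out = filtfilt(bn, an, out)
    # Normalize to zero mean and unit variance before source separation
    return (out - np.mean(out)) / np.std(out)
```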

  17. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  18. New methods to benchmark simulations of accreting black holes systems against observations

    Science.gov (United States)

    Markoff, Sera; Chatterjee, Koushik; Liska, Matthew; Tchekhovskoy, Alexander; Hesp, Casper; Ceccobello, Chiara; Russell, Thomas

    2017-08-01

    The field of black hole accretion has been significantly advanced by the use of complex ideal general relativistic magnetohydrodynamics (GRMHD) codes, now capable of simulating scales from the event horizon out to ~10^5 gravitational radii at high resolution. The challenge remains how to test these simulations against data, because the self-consistent treatment of radiation is still in its early days and is complicated by dependence on non-ideal/microphysical processes not yet included in the codes. At the other extreme, a variety of phenomenological models (disk, corona, jet, wind) can describe spectra or variability signatures well in a particular waveband, although often not both. To bring these two methodologies together, we need robust observational “benchmarks” that can be identified and studied in simulations. I will focus on one example of such a benchmark, from recent observational campaigns on black holes across the mass scale: the jet break. I will describe new work attempting to understand what drives this feature by searching for regions that share similar trends in terms of dependence on accretion power or magnetisation. Such methods can allow early tests of simulation assumptions and help pinpoint which regions will dominate the light production, well before full radiative processes are incorporated, and will help guide the interpretation of, e.g., Event Horizon Telescope data.

  19. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    Science.gov (United States)

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method (SM), is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are analyzed separately. Also, a nonparametric hypothesis test, based on Bootstrap resampling, is used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding the sensitivity, the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting, designed to analyze the performance, guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.
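
    A paired Bootstrap test of the kind mentioned above can be sketched as follows; per-record binary detection outcomes for the two pipelines are assumed as input, and the one-sided formulation is an illustrative choice, not necessarily the authors' exact test.

```python
import numpy as np

def bootstrap_pvalue(hits_a, hits_b, n_boot=10000, seed=0):
    """Paired Bootstrap on per-record detection outcomes (1 = detected, 0 = missed).

    Returns a one-sided p-value for 'method A is no more sensitive than method B'.
    """
    rng = np.random.default_rng(seed)
    hits_a, hits_b = np.asarray(hits_a), np.asarray(hits_b)
    n = len(hits_a)
    idx = rng.integers(0, n, size=(n_boot, n))   # resample records with replacement
    diffs = hits_a[idx].mean(axis=1) - hits_b[idx].mean(axis=1)
    return float((diffs <= 0).mean())
```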

  20. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    Kovtonyuk, A.; Petruzzi, A.; D'Auria, F.

    2015-01-01

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme, according to the following steps: - identification of influential phenomena; - identification of the associated physical models and parameters, depending on the code used; - quantification of the variation range of the identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters has been set up in the Specifications of Phase II of the PREMIUM benchmark. A set of quantitative criteria has also been proposed for the identification of influential input parameters and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants except one predict a too-fast quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour, which may have numerical origins. The criteria adopted for the identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications as is, while others modified the quantitative thresholds.

  1. Derivation of the critical effect size/benchmark response for the dose-response analysis of the uptake of radioactive iodine in the human thyroid.

    Science.gov (United States)

    Weterings, Peter J J M; Loftus, Christine; Lewandowski, Thomas A

    2016-08-22

    Potential adverse effects of chemical substances on thyroid function are usually examined by measuring serum levels of thyroid-related hormones. Instead, recent risk assessments for thyroid-active chemicals have focussed on iodine uptake inhibition, an upstream event that by itself is not necessarily adverse. Establishing the extent of uptake inhibition that can be considered de minimis, the chosen benchmark response (BMR), is therefore critical. The BMR values selected by two international advisory bodies were 5% and 50%, a difference that had correspondingly large impacts on the estimated risks and the health-based guidance values that were established. Potential treatment-related inhibition of thyroidal iodine uptake is usually determined by comparing thyroidal uptake of radioactive iodine (RAIU) during treatment with a single pre-treatment RAIU value. In the present study it is demonstrated that the physiological intra-individual variation in iodine uptake is much larger than 5%. Consequently, in-treatment RAIU values, expressed as a percentage of the pre-treatment value, have an inherent variation that needs to be considered when conducting dose-response analyses. Based on statistical and biological considerations, a BMR of 20% is proposed for benchmark dose analysis of human thyroidal iodine uptake data, to take the inherent variation in relative RAIU data into account. Implications for the tolerable daily intakes for perchlorate and chlorate, recently established by the European Food Safety Authority (EFSA), are discussed. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
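
    To illustrate how a 20% BMR translates into a benchmark dose, the sketch below fits a simple exponential inhibition model to relative RAIU data and solves for the dose at 20% uptake inhibition; the doses, uptake values, and model form are invented for illustration only and are not the study's data.

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

# Invented dose (arbitrary units) and relative RAIU (in-treatment / pre-treatment)
dose = np.array([0.0, 2.0, 7.0, 20.0, 60.0])
raiu = np.array([1.02, 0.95, 0.88, 0.70, 0.45])

def model(d, a, b):
    return a * np.exp(-b * d)     # hypothetical exponential inhibition model

(a, b), _ = curve_fit(model, dose, raiu, p0=(1.0, 0.01))
bmr = 0.20                        # the proposed 20% benchmark response
bmd = brentq(lambda d: model(d, a, b) - a * (1.0 - bmr), 0.0, dose.max())
print(f"BMD at 20% uptake inhibition: {bmd:.1f} (same units as dose)")
```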

  2. BENCHMARK OF MACHINE LEARNING METHODS FOR CLASSIFICATION OF A SENTINEL-2 IMAGE

    Directory of Open Access Journals (Sweden)

    F. Pirotti

    2016-06-01

    Full Text Available Thanks mainly to ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since land cover of a specific class may present a large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi layered perceptron, multi layered perceptron ensemble, ctree, boosting, logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification into 11 land-cover classes of an area of about 60 km2, obtained by manual visual interpretation of high-resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used, since the others have too few samples (pixels) for the testing and validating subsets. The classes used are the following: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, and (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold), and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and the control values over three sets of data: the training dataset (train), the whole control dataset (full), and with k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the
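
    A minimal sketch of this kind of k-fold benchmarking with scikit-learn follows, using synthetic stand-in data in place of Sentinel-2 pixels and only four of the nine methods; all settings are illustrative, not the study's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for (n_pixels, n_bands) Sentinel-2 samples with 5 classes
X, y = make_classification(n_samples=2000, n_features=13, n_informative=8,
                           n_classes=5, random_state=0)

models = {
    "lda": LinearDiscriminantAnalysis(),
    "knn": KNeighborsClassifier(),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```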

  3. Classification of criticality calculations with correlation coefficient method and its application to OECD/NEA burnup credit benchmarks phase III-A and II-A

    International Nuclear Information System (INIS)

    Okuno, Hiroshi

    2003-01-01

    A method for classifying benchmark results of criticality calculations according to similarity was proposed in this paper. After formulation of the method utilizing correlation coefficients, it was applied to the burnup credit criticality benchmarks Phase III-A and II-A, which were conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA). The Phase III-A benchmark was a series of criticality calculations for irradiated Boiling Water Reactor (BWR) fuel assemblies, whereas the Phase II-A benchmark was a suite of criticality calculations for irradiated Pressurized Water Reactor (PWR) fuel pins. These benchmark problems and their results are summarized. The correlation coefficients were calculated, and sets of benchmark calculation results were classified according to the criterion that the values of the correlation coefficients were no less than 0.15 for the Phase III-A and 0.10 for the Phase II-A benchmarks. When a couple of benchmark calculation results belonged to the same group, one calculation result was found to be predictable from the other. An example is shown for each of the benchmarks. While the evaluated nuclear data seemed to be the main factor in the classification, further investigations are required to find other factors. (author)
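
    The classification criterion can be sketched as single-linkage grouping on a correlation matrix; the layout of the results array and the grouping rule below are assumptions, not the paper's exact formulation.

```python
import numpy as np

def group_by_correlation(results, threshold=0.15):
    """results: (n_sets, n_cases) array of benchmark results (one row per result set).

    Groups result sets whose pairwise correlation coefficient is at least
    `threshold` (0.15 for Phase III-A, 0.10 for Phase II-A), using single linkage.
    """
    corr = np.corrcoef(results)
    unassigned = set(range(len(results)))
    groups = []
    while unassigned:
        stack = [unassigned.pop()]
        group = set(stack)
        while stack:
            i = stack.pop()
            for j in [j for j in unassigned if corr[i, j] >= threshold]:
                unassigned.remove(j)
                group.add(j)
                stack.append(j)
        groups.append(sorted(group))
    return groups
```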

  4. Characterization of the dynamic friction of woven fabrics: Experimental methods and benchmark results

    NARCIS (Netherlands)

    Sachs, Ulrich; Akkerman, Remko; Fetfatsidis, K.; Vidal-Sallé, E.; Schumacher, J.; Ziegmann, G.; Allaoui, S.; Hivet, G.; Maron, B.; Vanclooster, K.; Lomov, S.V.

    2014-01-01

    A benchmark exercise was conducted to compare various friction test set-ups with respect to the measured coefficients of friction. The friction was determined between Twintex®PP, a fabric of commingled yarns of glass and polypropylene filaments, and a metal surface. The same material was supplied to

  5. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR

  6. Benchmarking of EPRI-cell epithermal methods with the point-energy discrete-ordinates code (OZMA)

    International Nuclear Information System (INIS)

    Williams, M.L.; Wright, R.Q.; Barhen, J.; Rothenstein, W.

    1982-01-01

    The purpose of the present study is to benchmark E-C resonance-shielding and cell-averaging methods against a rigorous deterministic solution on a fine-group level (approx. 30 groups between 1 eV and 5.5 keV). The benchmark code used is OZMA, which solves the space-dependent slowing-down equations using continuous-energy discrete ordinates or integral transport theory to produce fine-group cross sections. Results are given for three water-moderated lattices - a mixed oxide, a uranium metal, and a tight-pitch high-conversion uranium oxide configuration. The latter two lattices were chosen because of the strong self-shielding of the 238U resonances

  7. Method to stimulate dose gradient in liquid media

    International Nuclear Information System (INIS)

    Scarlat, F.

    1993-01-01

    The depth absorbed dose from electrons with energy higher than 10 MeV shows a distribution with a large absorbed dose at the entrance surface and a small dose gradient. This is due to the large distance between the virtual focus and the irradiated liquid medium. In order to stimulate the dose gradient and decrease the surface dose, this paper presents a method for obtaining a second focus by means of a magnetostatic planar wiggler. Preliminary calculations indicated that the absorbed dose rate increases two to three times at the reference plane in the irradiated liquid medium. (Author)

  8. Altered operant responding for motor reinforcement and the determination of benchmark doses following perinatal exposure to low-level 2,3,7,8-tetrachlorodibenzo-p-dioxin.

    Science.gov (United States)

    Markowski, V P; Zareba, G; Stern, S; Cox, C; Weiss, B

    2001-06-01

    Pregnant Holtzman rats were exposed to a single oral dose of 0, 20, 60, or 180 ng/kg 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) on the 18th day of gestation. Their adult female offspring were trained to respond on a lever for brief opportunities to run in specially designed running wheels. Once they had begun responding on a fixed-ratio 1 (FR1) schedule of reinforcement, the fixed-ratio requirement for lever pressing was increased at five-session intervals to values of FR2, FR5, FR10, FR20, and FR30. We examined vaginal cytology after each behavior session to track estrous cyclicity. Under each of the FR values, perinatal TCDD exposure produced a significant dose-related reduction in the number of earned opportunities to run, the lever response rate, and the total number of revolutions in the wheel. Estrous cyclicity was not affected. Because of the consistent dose-response relationship at all FR values, we used the behavioral data to calculate benchmark doses based on displacements from the modeled zero-dose performance of 1% (ED(01)) and 10% (ED(10)), as determined by a quadratic fit to the dose-response function. The mean ED(10) benchmark dose for earned run opportunities was 10.13 ng/kg with a 95% lower bound of 5.77 ng/kg. The corresponding ED(01) was 0.98 ng/kg with a 95% lower bound of 0.83 ng/kg. The mean ED(10) for total wheel revolutions was calculated as 7.32 ng/kg with a 95% lower bound of 5.41 ng/kg. The corresponding ED(01) was 0.71 ng/kg with a 95% lower bound of 0.60 ng/kg. These values should be viewed from the perspective of current human body burdens, whose average value, based on TCDD toxic equivalents, has been calculated as 13 ng/kg.
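
    The quadratic benchmark-dose calculation can be illustrated as follows; the dose-response pairs are invented stand-ins, not the study's data, and the 95% lower bounds (BMDLs), which require a profile-likelihood or resampling step, are omitted.

```python
import numpy as np

# Invented stand-in data: TCDD dose (ng/kg) vs. earned run opportunities
dose = np.array([0.0, 20.0, 60.0, 180.0])
resp = np.array([100.0, 85.0, 62.0, 30.0])

# Quadratic fit r(d) = c2*d**2 + c1*d + c0, as in the benchmark dose analysis
c2, c1, c0 = np.polyfit(dose, resp, 2)

def ed(frac):
    """Dose at which the modeled response falls `frac` below the zero-dose value."""
    roots = np.roots([c2, c1, c0 * frac])   # solve r(d) = c0 * (1 - frac)
    real = roots[np.isreal(roots)].real
    return real[real > 0].min()

print(f"ED01 = {ed(0.01):.2f} ng/kg, ED10 = {ed(0.10):.2f} ng/kg")
```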

  9. Method of simulating dose reduction for digital radiographic systems

    International Nuclear Information System (INIS)

    Baath, M.; Haakansson, M.; Tingberg, A.; Maansson, L. G.

    2005-01-01

    The optimisation of image quality vs. radiation dose is an important task in medical imaging. To obtain maximum validity of the optimisation, it must be based on clinical images. Images at different dose levels can then be obtained either by collecting patient images at the different dose levels to be investigated - requiring additional exposures and permission from an ethical committee - or by manipulating images to simulate different dose levels. The aim of the present work was to develop a method of simulating dose reduction for digital radiographic systems. The method uses information about the detective quantum efficiency and noise power spectrum at the original and simulated dose levels to create an image containing filtered noise. When added to the original image, this results in an image whose noise, in terms of frequency content, agrees with the noise present in an image collected at the simulated dose level. To increase the validity, the method takes local dose variations in the original image into account. The method was tested on a computed radiography system and was shown to produce images with noise behaviour similar to that of images actually collected at the simulated dose levels. The method can, therefore, be used to modify an image collected at one dose level so that it simulates an image of the same object collected at any lower dose level. (authors)
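
    The core of such a method - adding frequency-shaped noise - can be sketched as below; the normalization of the noise power spectrum to the FFT grid is glossed over, and local dose variations, which the authors account for, are ignored in this simplification.

```python
import numpy as np

def add_simulated_noise(image, added_nps, rng=None):
    """Add frequency-shaped noise so `image` mimics a lower-dose acquisition.

    added_nps: 2-D array on the image's FFT grid holding the noise power that the
    dose reduction adds, i.e. NPS(simulated dose) - NPS(original dose), assumed
    already normalized to the FFT convention used here.
    """
    rng = rng or np.random.default_rng()
    white = rng.standard_normal(image.shape)
    # Shape white noise in frequency space by the required amplitude spectrum
    shaping = np.sqrt(np.maximum(added_nps, 0.0))
    shaped = np.fft.ifft2(np.fft.fft2(white) * shaping).real
    return image + shaped
```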

  10. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    International Nuclear Information System (INIS)

    Norris, Edward T.; Liu, Xin; Hsieh, Jiang

    2015-01-01

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low-dose region). Simulation with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of the deterministic simulation with quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer

  11. A convolution method for predicting mean treatment dose including organ motion at imaging

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2000-01-01

    Full text: The random treatment delivery errors (organ motion and set-up error) can be incorporated into the treatment planning software using a convolution method. The mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian, with variance equal to the sum of the organ motion and set-up error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population. Thus the variance of the variation kernel will equal the sum of the treatment delivery variance and the organ motion variance at planning for the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique are compared to benchmark Monte Carlo (MC) computations. Calculations of mean treatment dose using the convolution technique agreed with MC results for all cases to better than ±1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine
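
    A minimal sketch of the convolution step follows, assuming an isotropic Gaussian kernel and a uniform voxel size; the authors' kernel additionally depends on the organ position captured at imaging, which this simplification omits.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mean_treatment_dose(static_dose, sigma_motion_mm, sigma_setup_mm, voxel_mm):
    """Convolve a static dose grid with a Gaussian variation kernel whose variance
    is the sum of the organ-motion and set-up error variances."""
    sigma_mm = np.sqrt(sigma_motion_mm**2 + sigma_setup_mm**2)
    return gaussian_filter(static_dose, sigma=sigma_mm / voxel_mm)
```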

  12. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), termed holistic quality management in Indonesian, because benchmarking is a tool for looking for ideas or learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  13. Two NEA sensitivity, 1-D benchmark calculations. Part I: Sensitivity of the dose rate at the outside of a PWR configuration and of the vessel damage

    International Nuclear Information System (INIS)

    Canali, U.; Gonano, G.; Nicks, R.

    1978-01-01

    Within the framework of the coordinated programme of sensitivity analysis studies, the reactor shielding benchmark calculation concerning the shield of a typical Pressurized Water Reactor, as proposed by I.K.E. (Stuttgart) and K.W.U. (Erlangen) has been performed. The direct and adjoint fluxes were calculated using ANISN, the cross-section sensitivity using SWANLAKE. The cross-section library used was EL4, 100 neutron + 19 gamma groups. The following quantities were of interest: neutron damage in the pressure vessel; dose rate outside the concrete shield. SWANLAKE was used to calculate the sensitivity of the above mentioned results to variations in the density of each nuclide present. The contributions of the different cross-section Legendre components are also given. Sensitivity profiles indicate the energy ranges in which a cross-section variation has a greater influence on the results. (author)

  14. Application of benchmark dose modeling to protein expression data in the development and analysis of mode of action/adverse outcome pathways for testicular toxicity.

    Science.gov (United States)

    Chepelev, Nikolai L; Meek, M E Bette; Yauk, Carole Lyn

    2014-11-01

    Reliable quantification of gene and protein expression has potential to contribute significantly to the characterization of hypothesized modes of action (MOA) or adverse outcome pathways for critical effects of toxicants. Quantitative analysis of gene expression by benchmark dose (BMD) modeling has been facilitated by the development of effective software tools. In contrast, protein expression is still generally quantified by a less robust effect level (no or lowest [adverse] effect levels) approach, which minimizes its potential utility in the consideration of dose-response and temporal concordance for key events in hypothesized MOAs. BMD modeling is applied here to toxicological data on testicular toxicity to investigate its potential utility in analyzing protein expression relevant to the proposed MOA to inform human health risk assessment. The results illustrate how the BMD analysis of protein expression in animal tissues in response to toxicant exposure: (1) complements other toxicity data, and (2) contributes to consideration of the empirical concordance of dose-response relationships, as part of the weight of evidence for hypothesized MOAs to facilitate consideration and application in regulatory risk assessment. Lack of BMD analysis in proteomics has likely limited its use for these purposes. This paper illustrates the added value of BMD modeling to support and strengthen hypothetical MOAs as a basis to facilitate the translation and uptake of the results of proteomic research into risk assessment. Copyright © 2014 Her Majesty the Queen in Right of Canada. Journal of Applied Toxicology © 2014 John Wiley & Sons, Ltd.

  15. Evaluating the Resilience of the Bottom-up Method used to Detect and Benchmark the Smartness of University Campuses

    DEFF Research Database (Denmark)

    Giovannella, Carlo; Andone, Diana; Dascalu, Mihai

    2016-01-01

    A new method to perform a bottom-up extraction and benchmark of the perceived multilevel smartness of complex ecosystems has been recently described and applied to territories and learning ecosystems like university campuses and schools. In this paper we study the resilience of our method...... by comparing and integrating the data collected in several European Campuses during two different academic years, 2014-15 and 2015-16. The overall results are: a) a more adequate and robust definition of the orthogonal multidimensional space of representation of the smartness, and b) the definition...

  16. Benchmark Dose Modeling Estimates of the Concentrations of Inorganic Arsenic That Induce Changes to the Neonatal Transcriptome, Proteome, and Epigenome in a Pregnancy Cohort.

    Science.gov (United States)

    Rager, Julia E; Auerbach, Scott S; Chappell, Grace A; Martin, Elizabeth; Thompson, Chad M; Fry, Rebecca C

    2017-10-16

    Prenatal inorganic arsenic (iAs) exposure influences the expression of critical genes and proteins associated with adverse outcomes in newborns, in part through epigenetic mediators. The doses at which these genomic and epigenomic changes occur have yet to be evaluated in the context of dose-response modeling. The goal of the present study was to estimate iAs doses that correspond to changes in transcriptomic, proteomic, epigenomic, and integrated multi-omic signatures in human cord blood through benchmark dose (BMD) modeling. Genome-wide DNA methylation, microRNA expression, mRNA expression, and protein expression levels in cord blood were modeled against total urinary arsenic (U-tAs) levels from pregnant women exposed to varying levels of iAs. Dose-response relationships were modeled in BMDExpress, and BMDs representing 10% response levels were estimated. Overall, DNA methylation changes were estimated to occur at lower exposure concentrations in comparison to other molecular endpoints. Multi-omic module eigengenes were derived through weighted gene co-expression network analysis, representing co-modulated signatures across transcriptomic, proteomic, and epigenomic profiles. One module eigengene was associated with decreased gestational age occurring alongside increased iAs exposure. Genes/proteins within this module eigengene showed enrichment for organismal development, including potassium voltage-gated channel subfamily Q member 1 (KCNQ1), an imprinted gene showing differential methylation and expression in response to iAs. Modeling of this prioritized multi-omic module eigengene resulted in a BMD(BMDL) of 58(45) μg/L U-tAs, which was estimated to correspond to drinking water arsenic concentrations of 51(40) μg/L. Results are in line with epidemiological evidence supporting effects of prenatal iAs occurring at low exposure levels, and demonstrate that iAs exposure influences neonatal outcome-relevant transcriptomic, proteomic, and epigenomic profiles.

  17. RESULTS OF ANALYSIS OF BENCHMARKING METHODS OF INNOVATION SYSTEMS ASSESSMENT IN ACCORDANCE WITH AIMS OF SUSTAINABLE DEVELOPMENT OF SOCIETY

    Directory of Open Access Journals (Sweden)

    A. Vylegzhanina

    2016-01-01

    Full Text Available In this work, we introduce the results of a comparative analysis of international rating indexes of innovation systems with respect to their compliance with the purposes of sustainable development. The purpose of this research is to define requirements for benchmarking methods used to assess national or regional innovation systems, and to compare them based on the assumption that an innovation system should be aligned with the sustainable development concept. Analysis of the goal sets and concepts underlying the observed international composite innovation indexes, together with a comparison of their metrics and calculation techniques, allowed us to reveal the opportunities and limitations of using these methods within the framework of the sustainable development concept. We formulated targets of innovation development on the basis of the innovation priorities of sustainable socio-economic development. Using a comparative analysis of the indexes against these targets, we identified two methods of assessing innovation systems that are maximally connected with the goals of sustainable development. Nevertheless, today no benchmarking method meets the need of assessing innovation systems in compliance with the sustainable development concept to a sufficient extent. We suggest practical directions for developing methods that assess innovation systems in compliance with the goals of societal sustainable development.

  18. The Grad-Shafranov Reconstruction of Toroidal Magnetic Flux Ropes: Method Development and Benchmark Studies

    Science.gov (United States)

    Hu, Qiang

    2017-09-01

    We develop an approach to Grad-Shafranov (GS) reconstruction for toroidal structures in space plasmas, based on in situ spacecraft measurements. The underlying theory is the GS equation, which describes two-dimensional magnetohydrostatic equilibrium and is widely applied in fusion plasmas. The geometry is such that the arbitrary cross-section of the torus has rotational symmetry about the rotation axis, Z, with a major radius, r0. The magnetic field configuration is thus determined by a scalar flux function, Ψ, and a functional, F, that is a single-variable function of Ψ. The algorithm is implemented through a two-step approach: i) a trial-and-error process that minimizes the residue of the functional F(Ψ) to determine an optimal Z-axis orientation, and ii) for the chosen Z, a χ2 minimization process resulting in a range of r0. Benchmark studies of known analytic solutions to the toroidal GS equation with noise additions are presented to illustrate the two-step procedure and to demonstrate the performance of the numerical GS solver separately. For the cases presented, the errors in Z and r0 are 9° and 22%, respectively, and the relative percent error in the numerical GS solutions is smaller than 10%. We also make public the computer codes for these implementations and benchmark studies.
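
    Step i) amounts to scoring how single-valued F(Ψ) appears for each trial axis orientation. One hedged way to score a trial orientation is sketched below; a low-order polynomial fit stands in for F(Ψ), and the polynomial order and normalization are assumptions, not the authors' exact residue definition.

```python
import numpy as np

def axis_residue(psi, f):
    """Score a trial Z-axis orientation by how single-valued F(Psi) appears.

    psi, f: values of the flux function and of F sampled along the spacecraft
    path for this trial orientation. The residue is the normalized scatter of
    the samples about a low-order polynomial fit to F(Psi).
    """
    coeffs = np.polyfit(psi, f, 3)               # polynomial order is an assumption
    rms = np.sqrt(np.mean((f - np.polyval(coeffs, psi))**2))
    return rms / (np.max(f) - np.min(f))

# Step i) then reduces to picking the trial axis with the smallest residue,
# with psi and f recomputed from the measurements for each trial orientation.
```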

  19. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency.

  20. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  1. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  2. An efficient dose-compensation method for proximity effect correction

    International Nuclear Information System (INIS)

    Wang Ying; Han Weihua; Yang Xiang; Zhang Yang; Yang Fuhua; Zhang Renping

    2010-01-01

    A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that, for simplicity, the compensated dose factor is only affected by the nearest neighbors. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved. (semiconductor technology)
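
    Under the two stated assumptions, the compensation reduces to counting nearest neighbours and applying a linear correction. A sketch with hypothetical coefficients for a hexagonal lattice (at most six neighbours for an interior hole) follows; it is an illustration, not the authors' calibration.

```python
import numpy as np

def compensated_dose_factors(holes, base_dose=1.0, k=0.08, r_nn=1.05):
    """holes: (N, 2) hole centres of the pattern, in units of the lattice constant.

    Each hole's dose factor is lowered in proportion to its number of nearest
    neighbours (at most six on a hexagonal lattice); k and r_nn are hypothetical.
    """
    doses = np.full(len(holes), base_dose)
    for i, centre in enumerate(holes):
        dist = np.hypot(*(holes - centre).T)
        n_nn = np.sum((dist > 0) & (dist <= r_nn))   # nearest neighbours only
        doses[i] = base_dose * (1.0 - k * n_nn / 6.0)
    return doses
```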

  3. Methods to stimulate national and sub-national benchmarking through international health system performance comparisons: a Canadian approach.

    Science.gov (United States)

    Veillard, Jeremy; Moses McKeag, Alexandra; Tipper, Brenda; Krylova, Olga; Reason, Ben

    2013-09-01

    This paper presents, discusses and evaluates methods used by the Canadian Institute for Health Information to present international comparisons of health system performance in ways that facilitate their understanding by the public and health system policy-makers, and that can stimulate performance benchmarking. We used statistical techniques to normalize the results and present them on a standardized scale that facilitates understanding of the results. We compared results to the OECD average and to benchmarks. We also applied various data quality rules to ensure the validity of the results. In order to evaluate the impact of the public release of these results, we used quantitative and qualitative methods and documented other types of impact. We were able to present results for performance indicators and dimensions at national and sub-national levels; develop performance profiles for each Canadian province; and show pan-Canadian performance patterns for specific performance indicators. The results attracted significant media attention at the national level and reactions from various stakeholders. Other impacts, such as requests for additional analysis and improvements in data timeliness, were observed. The methods used proved attractive to various audiences in the Canadian context and achieved the objectives originally defined. These methods could be refined and applied in different contexts. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
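
    One hedged reading of the normalization step - expressing each indicator as a distance from the unweighted average in standard-deviation units and rescaling for presentation - is sketched below; the 0-100 scale and its anchoring are assumptions, not CIHI's published method.

```python
import pandas as pd

def standardized_scores(results: pd.DataFrame) -> pd.DataFrame:
    """results: rows = jurisdictions, columns = indicators (higher = better assumed).

    Each result is expressed as its distance from the column average in
    standard-deviation units, then rescaled to a 0-100 presentation scale.
    """
    z = (results - results.mean()) / results.std(ddof=0)
    return (50.0 + 10.0 * z).clip(lower=0.0, upper=100.0)
```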

  4. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. This paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When a particular compiler option and math library were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined to be the unit, one EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs, such as Pentiums, i486 and DEC Alpha, and so forth. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated for correlation with industry benchmark programs, namely SPECmark. (author)

  5. Impact of Genomics Platform and Statistical Filtering on Transcriptional Benchmark Doses (BMD) and Multiple Approaches for Selection of Chemical Point of Departure (PoD).

    Directory of Open Access Journals (Sweden)

    A Francina Webster

    Full Text Available Many regulatory agencies are exploring ways to integrate toxicogenomic data into their chemical risk assessments. The major challenge lies in determining how to distill the complex data produced by high-content, multi-dose gene expression studies into quantitative information. It has been proposed that benchmark dose (BMD) values derived from toxicogenomics data be used as point of departure (PoD) values in chemical risk assessments. However, there is limited information regarding which genomics platforms are most suitable and how to select appropriate PoD values. In this study, we compared BMD values modeled from RNA sequencing-, microarray-, and qPCR-derived gene expression data from a single study, and explored multiple approaches for selecting a single PoD from these data. The strategies evaluated include several that do not require prior mechanistic knowledge of the compound for selection of the PoD, thus providing approaches for assessing data-poor chemicals. We used RNA extracted from the livers of female mice exposed to non-carcinogenic (0, 2 mg/kg/day, mkd) and carcinogenic (4, 8 mkd) doses of furan for 21 days. We show that transcriptional BMD values were consistent across technologies and highly predictive of the two-year cancer bioassay-based PoD. We also demonstrate that filtering data based on statistically significant changes in gene expression prior to BMD modeling creates more conservative BMD values. Taken together, this case study on mice exposed to furan demonstrates that high-content toxicogenomics studies produce robust data for BMD modelling that are minimally affected by inter-technology variability and highly predictive of cancer-based PoD doses.

  6. Dose Escalation Methods in Phase I Cancer Clinical Trials

    OpenAIRE

    Le Tourneau, Christophe; Lee, J. Jack; Siu, Lillian L.

    2009-01-01

    Phase I clinical trials are an essential step in the development of anticancer drugs. The main goal of these studies is to establish the recommended dose and/or schedule of new drugs or drug combinations for phase II trials. The guiding principle for dose escalation in phase I trials is to avoid exposing too many patients to subtherapeutic doses while preserving safety and maintaining rapid accrual. Here we review dose escalation methods for phase I trials, including the rule-based and model-...

  7. A NRC-BNL benchmark evaluation of seismic analysis methods for non-classically damped coupled systems

    International Nuclear Information System (INIS)

    Xu, J.; DeGrassi, G.; Chokshi, N.

    2004-01-01

    Under the auspices of the U.S. Nuclear Regulatory Commission (NRC), Brookhaven National Laboratory (BNL) developed a comprehensive program to evaluate state-of-the-art methods and computer programs for seismic analysis of typical coupled nuclear power plant (NPP) systems with non-classical damping. In this program, four benchmark models of coupled building-piping/equipment systems with different damping characteristics were developed and analyzed by BNL for a suite of earthquakes. The BNL analysis was carried out by the Wilson-θ time domain integration method with the system-damping matrix computed using a synthesis formulation as presented in a companion paper [Nucl. Eng. Des. (2002)]. These benchmark problems were subsequently distributed to and analyzed by program participants applying their uniquely developed methods and computer programs. This paper is intended to offer a glimpse at the program, and provides a summary of major findings and principal conclusions with some representative results. The participants' analysis results established using complex modal time history methods showed good agreement with the BNL solutions, while the analyses produced with either complex-mode response spectrum methods or the classical normal-mode response spectrum method, in general, produced more conservative results when averaged over a suite of earthquakes. However, when coupling due to damping is significant, complex-mode response spectrum methods performed better than the classical normal-mode response spectrum method. Furthermore, as part of the program objectives, a parametric assessment is also presented in this paper, aimed at evaluating the applicability of various analysis methods to problems with different dynamic characteristics unique to coupled NPP systems. It is believed that the findings and insights learned from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving license

  8. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group on Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented in addition to the twenty-one problems proposed previously, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  9. The lead cooled fast reactor benchmark Brest-300: analysis with sensitivity method

    International Nuclear Information System (INIS)

    Smirnov, V.; Orlov, V.; Mourogov, A.; Lecarpentier, D.; Ivanova, T.

    2005-01-01

    The lead-cooled fast neutron reactor is one of the most interesting candidates for the development of atomic energy. BREST-300 is a 300 MWe lead-cooled fast reactor developed by NIKIET (Russia) with a deterministic safety approach which aims to exclude reactivity margins greater than the delayed neutron fraction. The development of innovative reactors (lead coolant, nitride fuel...) and fuel cycles with new constraints such as cycle closure or actinide burning requires new technologies and new nuclear data. In this connection, the tools and neutron data used for the calculational analysis of reactor characteristics require thorough validation. NIKIET developed a reactor benchmark fitting the design-type calculational tools (including neutron data). In the frame of technical exchanges between NIKIET and EDF (France), the results of this benchmark calculation concerning the principal parameters of fuel evolution and safety have been inter-compared, in order to estimate the uncertainties and validate the codes for calculations of this new kind of reactor. Different codes and cross-section data have been used, and sensitivity studies have been performed to understand and quantify the uncertainty sources. The comparison of results shows that the difference in keff value between the ERANOS code with the ERALIB1 library and the reference is of the same order of magnitude as the delayed neutron fraction. On the other hand, the discrepancy is more than twice as big if the JEF2.2 library is used with ERANOS. Analysis of the discrepancies in the calculation results reveals that the main effect comes from differences in nuclear data, namely the 238U and 239Pu fission and capture cross sections and the lead inelastic cross sections

  10. Bacterial whole genome-based phylogeny: construction of a new benchmarking dataset and assessment of some existing methods.

    Science.gov (United States)

    Ahrenfeldt, Johanne; Skaarup, Carina; Hasman, Henrik; Pedersen, Anders Gorm; Aarestrup, Frank Møller; Lund, Ole

    2017-01-05

    Whole genome sequencing (WGS) is increasingly used in diagnostics and surveillance of infectious diseases. A major application for WGS is to use the data for identifying outbreak clusters, and there is therefore a need for methods that can accurately and efficiently infer phylogenies from sequencing reads. In the present study we describe a new dataset that we have created for the purpose of benchmarking such WGS-based methods on epidemiological data, and also present an analysis where we use the data to compare the performance of some current methods. Our aim was to create a benchmark data set that mimics sequencing data of the sort that might be collected during an outbreak of an infectious disease. This was achieved by letting an E. coli hypermutator strain grow in the lab for 8 consecutive days, each day splitting the culture in two while also collecting samples for sequencing. The result is a data set consisting of 101 whole genome sequences with known phylogenetic relationships. Among the sequenced samples, 51 correspond to internal nodes in the phylogeny because they are ancestral, while the remaining 50 correspond to leaves. We also used the newly created data set to compare three different online available methods that infer phylogenies from whole-genome sequencing reads: NDtree, CSI Phylogeny and REALPHY. One complication when comparing the output of these methods with the known phylogeny is that phylogenetic methods typically build trees where all observed sequences are placed as leaves, even though some of them are in fact ancestral. We therefore devised a method for post-processing the inferred trees by collapsing short branches (thus relocating some leaves to internal nodes), and also present two new measures of tree similarity that take into account the identity of both internal and leaf nodes. Based on this analysis we find that, among the investigated methods, CSI Phylogeny had the best performance, correctly identifying 73% of all branches in the
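
    The post-processing step of collapsing short branches can be sketched with the ete3 toolkit (an assumption - the record does not name a library); deleting a near-zero-length internal node re-attaches its children to the parent, relocating sequenced ancestors from leaf positions to internal nodes. The length threshold below is hypothetical.

```python
from ete3 import Tree

def collapse_short_branches(newick, min_dist=1e-6):
    """Collapse internal branches shorter than `min_dist` (threshold assumed),
    so that sequenced ancestors can sit at internal nodes rather than leaves."""
    tree = Tree(newick)
    for node in tree.traverse("postorder"):
        if not node.is_root() and not node.is_leaf() and node.dist < min_dist:
            node.delete()   # children are re-attached to the node's parent
    return tree
```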

  11. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    -tailed hawk, osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water)]. The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal exposure are not considered in this report.

  12. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group on Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. (author)

  13. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  14. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  15. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  16. Dose escalation methods in phase I cancer clinical trials.

    Science.gov (United States)

    Le Tourneau, Christophe; Lee, J Jack; Siu, Lillian L

    2009-05-20

    Phase I clinical trials are an essential step in the development of anticancer drugs. The main goal of these studies is to establish the recommended dose and/or schedule of new drugs or drug combinations for phase II trials. The guiding principle for dose escalation in phase I trials is to avoid exposing too many patients to subtherapeutic doses while preserving safety and maintaining rapid accrual. Here we review dose escalation methods for phase I trials, including the rule-based and model-based dose escalation methods that have been developed to evaluate new anticancer agents. Toxicity has traditionally been the primary endpoint for phase I trials involving cytotoxic agents. However, with the emergence of molecularly targeted anticancer agents, potential alternative endpoints to delineate optimal biological activity, such as plasma drug concentration and target inhibition in tumor or surrogate tissues, have been proposed along with new trial designs. We also describe specific methods for drug combinations as well as methods that use a time-to-event endpoint or both toxicity and efficacy as endpoints. Finally, we present the advantages and drawbacks of the various dose escalation methods and discuss specific applications of the methods in developmental oncotherapeutics.

  17. Antipsychotic dose equivalents and dose-years: a standardized method for comparing exposure to different drugs.

    Science.gov (United States)

    Andreasen, Nancy C; Pressler, Marcus; Nopoulos, Peg; Miller, Del; Ho, Beng-Choon

    2010-02-01

    A standardized quantitative method for comparing dosages of different drugs is a useful tool for designing clinical trials and for examining the effects of long-term medication side effects such as tardive dyskinesia. Such a method requires establishing dose equivalents. An expert consensus group has published charts of equivalent doses for various first- and second-generation antipsychotic medications. These charts were used in this study. Regression was used to compare each drug in the experts' charts to chlorpromazine and haloperidol and to create formulas for each relationship. The formulas were solved for chlorpromazine 100 mg and haloperidol 2 mg to derive new chlorpromazine and haloperidol equivalents. The formulas were incorporated into our definition of dose-years such that 100 mg/day of chlorpromazine equivalent or 2 mg/day of haloperidol equivalent taken for 1 year is equal to one dose-year. All comparisons to chlorpromazine and haloperidol were highly linear, with R² values greater than 0.9. A power transformation further improved linearity. By deriving a unique formula that converts doses to chlorpromazine or haloperidol equivalents, we can compare otherwise dissimilar drugs. These equivalents can be multiplied by the time an individual has been on a given dose to derive a cumulative value measured in dose-years in the form of (chlorpromazine equivalent in mg) × (time on dose measured in years). After each dose has been converted to dose-years, the results can be summed to provide a cumulative quantitative measure of lifetime exposure. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  18. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage, and the final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described, and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  19. A systematic benchmark method for analysis and comparison of IMRT treatment planning algorithms.

    Science.gov (United States)

    Mayo, Charles S; Urie, Marcia M

    2003-01-01

    Tools and procedures for evaluating and comparing different intensity-modulated radiation therapy (IMRT) systems are presented. IMRT is increasingly in demand and there are numerous systems available commercially. These programs present dosimetrists and physicists with software that differs significantly from conventional planning systems, and the options often seem overwhelmingly complex to the new user. By creating geometric target volumes and critical normal tissues, the characteristics of the algorithms may be investigated and the influence of the different parameters explored. Overall optimization strategies of an algorithm may be characterized by treating a square target volume (TV) with 2 perpendicular beams, with and without heterogeneities. A half-donut (hemi-annulus) TV with a "donut hole" (central cylinder) critical normal tissue (CNT) on a CT of a simulated quality assurance phantom is suggested as a good geometry for exploring the IMRT algorithm parameters. Using this geometry, an order of varying the parameters is suggested. The first step is to determine the effects of the number of stratifications of optimized intensity fluence on the resulting dose distribution, and to select a fixed number of stratifications for further studies. To characterize the dose distributions, a dose-homogeneity index (DHI) is defined as the ratio of the dose received by 90% of the volume to the minimum dose received by the "hottest" 10% of the volume. The next step is to explore the effects of priority and penalty on both the TV and the CNT. Then, with these parameters fixed, the effects of varying the number of beams can be examined. In addition to evaluating the dose distributions (and DHI), the number of subfields and the number of monitor units required for different numbers of stratifications and beams can be evaluated.
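
    The DHI defined above is straightforward to compute from sampled voxel doses. The following is a minimal sketch, not the authors' code; the function name and the equal-volume-voxel assumption are ours.

```python
import numpy as np

def dose_homogeneity_index(doses):
    """DHI as defined above: the dose received by 90% of the volume divided
    by the minimum dose received by the "hottest" 10% of the volume.
    `doses` holds one sample per equal-volume voxel of the target."""
    d = np.sort(np.asarray(doses, dtype=float))
    n = d.size
    d90 = d[int(0.10 * n)]             # 90% of voxels receive at least this
    hottest10_min = d[int(0.90 * n)]   # minimum dose within the hottest 10%
    return d90 / hottest10_min

# A fairly homogeneous plan scores close to 1.0
rng = np.random.default_rng(0)
print(round(dose_homogeneity_index(rng.normal(60.0, 1.5, 10_000)), 3))
```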

  20. Method and apparatus for determining the dose value of neutrons

    International Nuclear Information System (INIS)

    Burgkhardt, B.; Piesch, E.

    1976-01-01

    A method is provided for determining the dose value of neutrons leaving a body as thermal and intermediate neutrons after having been scattered in the body. A first dose value of thermal and intermediate neutrons is detected on the surface of the body by means of a first neutron detector, which is shielded against thermal and intermediate neutrons not emerging from the body. A second detector is used to measure a second dose value of the thermal and intermediate neutrons not emerging from the body. A correction factor based on the first and second values is obtained from a calibration diagram and is applied to the first dose value to obtain a corrected first dose value. 21 Claims, 6 Drawing Figures

  1. The experimental method for neutron dose-equivalent detection

    International Nuclear Information System (INIS)

    Ji Changsong

    1992-01-01

    A new method for obtaining neutron dose equivalent, the Cd-rod absorption method, is described. The method adopts bulk absorption in a swarm of Cd rods, which greatly improves the neutron sensitivity and simplifies the adjustment procedure. Using this method, the author developed the BH3105 model neutron dose-equivalent meter. The sensitivity of this instrument reaches 10 cps per μSv/h, the γ-ray rejection ratio reaches 4000:1, and the measurement range is 0.1 μSv/h to 10⁶ μSv/h. The energy response is good (from thermal neutrons to 14 MeV neutrons), and the instrument can be used to measure the dose equivalent in neutron areas.

  2. Manual method for dose calculation in gynecologic brachytherapy

    International Nuclear Information System (INIS)

    Vianello, Elizabeth A.; Almeida, Carlos E. de; Biaggio, Maria F. de

    1998-01-01

    This paper describes a manual method for dose calculation in brachytherapy of gynecological tumors, which allows calculation of the dose at any plane or point of clinical interest. The method uses basic principles of vector algebra together with orthogonal simulation films taken of the patient with the applicators and dummy sources in place. The results obtained with the method were compared with the values calculated with the Theraplan treatment planning system, and the agreement was better than 5% in most cases. The critical points associated with the final accuracy of the proposed method are related to the quality of the image and the appropriate selection of the magnification factors. This method is strongly recommended for radiation oncology centers where no treatment planning systems are available and dose calculations are done manually. (author)
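
    The combination of orthogonal films, magnification factors and vector algebra can be illustrated with a short sketch. The geometry reconstruction below follows the standard orthogonal-film technique; the source strengths, magnification factors and the bare inverse-square dose model (no tissue attenuation or source anisotropy, which a clinical calculation must include) are illustrative assumptions, not the authors' published data.

```python
def reconstruct_point(ap_xy, lat_xy, mag_ap, mag_lat):
    """Recover patient-space coordinates (x, y, z) of a point seen on two
    orthogonal simulation films. The AP film gives (x, z) and the lateral
    film gives (y, z); film coordinates are divided by the magnification
    factor, and z is averaged between the two films."""
    x = ap_xy[0] / mag_ap
    y = lat_xy[0] / mag_lat
    z = 0.5 * (ap_xy[1] / mag_ap + lat_xy[1] / mag_lat)
    return (x, y, z)

def dose_rate(point, sources):
    """Inverse-square dose rate at `point` from (position, strength) pairs;
    strength in cGy*cm^2/h (dummy values below)."""
    total = 0.0
    for pos, strength in sources:
        r2 = sum((a - b) ** 2 for a, b in zip(point, pos))
        total += strength / r2
    return total

# Two dummy sources of a gynecological insertion (film coordinates in cm)
srcs = [(reconstruct_point((1.2, 0.5), (0.3, 0.6), 1.15, 1.20), 40.0),
        (reconstruct_point((1.0, -1.8), (0.2, -1.7), 1.15, 1.20), 40.0)]
print(f"{dose_rate((2.0, 0.0, 0.0), srcs):.1f} cGy/h")
```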

  3. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed and the results of the pilot study are presented; these are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given, and the cost-efficiency questions still open in the areas of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article.

  4. Improvement of dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    Onda, Takashi; Yoshida, Yoshitaka; Kudo, Seiichi; Nishimura, Kazuya

    2003-01-01

    It is expected that the selection of access routes for employees who engage in emergency work during a severe accident at a nuclear power plant makes a difference to their radiation doses. To examine how much difference in dose arises from the selection of access routes, for the case of a severe accident in a pressurized water reactor plant, we improved the method for obtaining employee doses and expanded the analysis system. Through this expansion and improvement we have realized the following: (1) dose evaluation is possible over the whole plant area; (2) the efficiency of the calculation is increased by reducing the number of radiation sources, etc.; and (3) the functionality is improved by introducing a sky-shine calculation for the highest floor, etc. The improved system clarifies the following: (1) the doses vary with the selected access route, and the system can quantify this difference; and (2) choosing the most adequate access route is effective in suppressing employee doses. (author)

  5. Method for dose calculation in intracavitary irradiation of endometrical carcinoma

    International Nuclear Information System (INIS)

    Zevrieva, I.F.; Ivashchenko, N.T.; Musapirova, N.A.; Fel'dman, S.Z.; Sajbekov, T.S.

    1979-01-01

    A method of dose calculation was elaborated for the conditions of intracavitary gamma therapy of endometrial carcinoma using spherical and linear 60Co sources. Calculations of dose rates for different amounts and orientations of spherical radiation sources and for different planes were made with the aid of a BEhSM-4M computer. A dosimetric study of the dose fields was made using a phantom imitating the real conditions of irradiation. Discrepancies between experimental and calculated values are within the limits of the experimental accuracy.

  6. Measurement of annual dose on porcelain using surface TLD method

    International Nuclear Information System (INIS)

    Xia Junding; Wang Weida; Leung, P.L.

    2001-01-01

    In order to improve the accuracy of the TL authentication test for porcelain, a method of measuring the annual dose using an ultrathin (CaSO4:Tm) dosimeter layer on porcelain was studied. The TLD was placed on a part of the porcelain without glaze. A comparison of annual dose measurements by surface TLD, inside TLD and alpha counting on porcelain was made. The results show that this technique is suitable for measuring the annual dose and improves the accuracy of the TL authentication test for both porcelain and pottery.

  7. Evaluation of Different Methods for Identification of Structural Alerts Using Chemical Ames Mutagenicity Data Set as a Benchmark.

    Science.gov (United States)

    Yang, Hongbin; Li, Jie; Wu, Zengrui; Li, Weihua; Liu, Guixia; Tang, Yun

    2017-06-19

    Identification of structural alerts for toxicity is useful in drug discovery and other fields such as environmental protection. With structural alerts, researchers can quickly identify potentially toxic compounds and learn how to modify them. Hence, it is important to determine structural alerts from a large number of compounds quickly and accurately. Many methods for identification of structural alerts have been reported; how to evaluate those methods, however, remains a problem. In this paper, we evaluated four methods for monosubstructure identification with three indices, accuracy rate, coverage rate, and information gain, to compare their advantages and disadvantages. Kazius' Ames mutagenicity data set was used as the benchmark, and the four methods were MoSS (graph-based), SARpy (fragment-based), and two fingerprint-based methods, Bioalerts and the fingerprint (FP) method we previously used. The results showed that Bioalerts and FP could detect key substructures with high accuracy and coverage rates because they allow unclosed rings and wildcard atom or bond types. However, they also produced redundancy, so their predictive performance was not as good as that of SARpy. SARpy was competitive in predictive performance on both the training set and the external validation set. These results may help users select appropriate methods and guide further development of methods for identification of structural alerts.
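
    The three evaluation indices used in such comparisons can be computed from a simple contingency of alert matches. A hedged sketch follows; the counts and function names are illustrative, not taken from the paper.

```python
import math

def entropy(pos, neg):
    """Shannon entropy (bits) of a binary-labelled set."""
    total = pos + neg
    if total == 0 or pos == 0 or neg == 0:
        return 0.0
    p = pos / total
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def alert_scores(tp, fp, n_tox, n_nontox):
    """tp/fp: toxic/non-toxic compounds containing the alert;
    n_tox/n_nontox: dataset totals. Returns (accuracy rate,
    coverage rate, information gain)."""
    matched = tp + fp
    accuracy = tp / matched if matched else 0.0   # precision of the alert
    coverage = tp / n_tox if n_tox else 0.0       # recall over the toxic set
    n = n_tox + n_nontox
    h_before = entropy(n_tox, n_nontox)
    h_after = (matched / n) * entropy(tp, fp) \
        + ((n - matched) / n) * entropy(n_tox - tp, n_nontox - fp)
    return accuracy, coverage, h_before - h_after

print(alert_scores(tp=300, fp=40, n_tox=2000, n_nontox=2000))
```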

  8. Epidemiological methods for assessing dose-response and dose-effect relationships

    DEFF Research Database (Denmark)

    Kjellström, Tord; Grandjean, Philippe

    2007-01-01

    Description: The Handbook of the Toxicology of Metals is the standard reference work for physicians, toxicologists and engineers in the field of environmental and occupational health. This new edition is a comprehensive review of the effects on biological systems from metallic elements, giving access to a broad range of basic toxicological data and a general introduction to the toxicology of metallic compounds. Contents include: Selected Molecular Mechanisms of Metal Toxicity and Carcinogenicity; General Considerations of Dose-Effect and Dose-Response Relationships; Interactions in Metal Toxicology; Epidemiological Methods for Assessing Dose-Response and Dose-Effect Relationships; and Essential Metals: Assessing Risks from Deficiency. Audience: toxicologists, physicians, and engineers in the fields of environmental and occupational health, as well as libraries in these disciplines.

  9. Calculation method for gamma-dose rates from spherical puffs

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Deme, S.; Lang, E.

    1993-05-01

    The Lagrangian puff models are widely used for calculation of the dispersion of atmospheric releases. The basic outputs from such models are concentrations of material in the air and on the ground. The simplest method for calculating the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for points far away from the release point. The exact calculation of the cloud dose using the volume integral requires significant computer time. The volume integral for the gamma dose can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but the accuracy is usually poor because the same correction factors are used for all isotopes. The authors describe a more elaborate correction method, which uses precalculated values of the gamma-dose rate as a function of the puff dispersion parameter (δp) and the distance from the puff centre, for four energy groups. The released energy for each radionuclide in each energy group has been calculated and tabulated. Based on these tables and a suitable interpolation procedure, the calculation of gamma doses takes very little time and is almost independent of the number of radionuclides. (au) (7 tabs., 7 ills., 12 refs.)
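
    The core of such correction methods is a fast lookup in precalculated dose-rate tables. The sketch below shows the interpolation step for a single energy group; the grid and table values are hypothetical placeholders, not the tabulated data of the report.

```python
import numpy as np

# Hypothetical precalculated grid for one energy group: rows = puff
# dispersion parameter (m), columns = distance from puff centre (m),
# values = normalized gamma dose rate (arbitrary units).
sigma_grid = np.array([10.0, 30.0, 100.0, 300.0])
r_grid = np.array([0.0, 50.0, 200.0, 500.0])
table = np.array([[5.0, 2.0, 0.30, 0.02],
                  [3.5, 1.8, 0.35, 0.03],
                  [1.2, 0.9, 0.40, 0.05],
                  [0.3, 0.25, 0.15, 0.04]])

def dose_rate_lookup(sig, r):
    """Bilinear interpolation in the precalculated table, mirroring the
    lookup-plus-interpolation scheme described above."""
    i = int(np.clip(np.searchsorted(sigma_grid, sig) - 1, 0, sigma_grid.size - 2))
    j = int(np.clip(np.searchsorted(r_grid, r) - 1, 0, r_grid.size - 2))
    ts = (sig - sigma_grid[i]) / (sigma_grid[i + 1] - sigma_grid[i])
    tr = (r - r_grid[j]) / (r_grid[j + 1] - r_grid[j])
    return ((1 - ts) * (1 - tr) * table[i, j] + ts * (1 - tr) * table[i + 1, j]
            + (1 - ts) * tr * table[i, j + 1] + ts * tr * table[i + 1, j + 1])

# Puff with dispersion parameter 60 m, receptor 120 m from the puff centre
print(f"{dose_rate_lookup(60.0, 120.0):.3f} (arb. units)")
```

    Summing such lookups over the energy groups, weighted by the tabulated released energy of each radionuclide, would give the total puff gamma dose rate.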

  10. Two gamma dose evaluation methods for silicon semiconductor detector

    International Nuclear Information System (INIS)

    Chen Faguo; Jin Gen; Yang Yapeng; Xu Yuan

    2011-01-01

    Silicon PIN diodes have been widely used as personal and area dosimeters because of their small volume, simplicity and real-time operation. However, because silicon is neither a tissue-equivalent nor an air-equivalent material, an intrinsic disadvantage of silicon dosimeters is a significant over-response in the low-energy region, especially below 200 keV. Using an energy-compensation filter to flatten the energy response is one method of overcoming this disadvantage, but with the compensation method the estimated dose depends only on the number of detector pulses. A weight-function method is therefore introduced to evaluate the gamma dose, which depends on both the number of pulses and their amplitudes. (authors)
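
    The difference between the two evaluation schemes is easy to state in code: compensation counts pulses with a single calibration factor, while the weight-function method lets each pulse contribute according to its amplitude. A sketch with made-up bin edges and weights follows.

```python
import numpy as np

def dose_weight_function(pulse_amplitudes, bin_edges, weights):
    """Weight-function evaluation: each detector pulse contributes the
    weight assigned to its amplitude bin, so the dose estimate uses both
    the number of pulses and their amplitudes."""
    idx = np.clip(np.digitize(pulse_amplitudes, bin_edges) - 1,
                  0, len(weights) - 1)
    return float(np.sum(weights[idx]))

# Illustrative numbers only: four amplitude bins (channel units) and weights
# (nSv per pulse) that de-emphasize the low-energy pulses where a bare
# silicon diode over-responds.
edges = np.array([0.0, 50.0, 150.0, 400.0, 1000.0])
w = np.array([0.02, 0.08, 0.25, 0.60])
pulses = np.random.default_rng(1).exponential(200.0, size=5000)
print(f"{dose_weight_function(pulses, edges, w):.1f} nSv")
```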

  11. A unique manual method for emergency offsite dose calculations

    International Nuclear Information System (INIS)

    Wildner, T.E.; Carson, B.H.; Shank, K.E.

    1987-01-01

    This paper describes a manual method developed for performing emergency offsite dose calculations for PP&L's Susquehanna Steam Electric Station. The method is based on a three-part carbonless form. The front page guides the user through selection of the appropriate accident case and entry of meteorological and effluent data. By circling the applicable accident descriptors, the user also circles the dose factors on pages 2 and 3, which are then simply multiplied to yield the whole-body and thyroid dose rates at the plant boundary and at two, five, and ten miles. The process used to generate the worksheet is discussed, including the method used to incorporate the observed terrain effects on airflow patterns caused by the Susquehanna River Valley topography.

  12. A Method for Correcting IMRT Optimizer Heterogeneity Dose Calculations

    International Nuclear Information System (INIS)

    Zacarias, Albert S.; Brown, Mellonie F.; Mills, Michael D.

    2010-01-01

    Radiation therapy treatment planning for volumes close to the patient's surface, in lung tissue, and in the head and neck region can be challenging for the planning system optimizer because of the complexity of the treatment and protected volumes, as well as substantial heterogeneity corrections. Because it is often the goal of the planner to produce an isodose plan with uniform dose throughout the planning target volume (PTV), there is a need for improved planning optimization procedures for PTVs located in these anatomical regions. To illustrate such an improved procedure, we present a treatment planning case of a patient with a lesion in the posterior right lung. The intensity-modulated radiation therapy (IMRT) plan generated using standard optimization procedures produced substantial dose nonuniformity across the tumor, caused by the lung tissue surrounding it. We demonstrate a novel iterative method of dose correction performed on the initial IMRT plan to produce a more uniform dose distribution within the PTV. This optimization method corrected for the dose missing on the periphery of the PTV and reduced the maximum PTV dose from 120% in the representative IMRT plan to 106%.
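
    The iterative correction idea (compare the achieved PTV dose with the prescription, boost contributions to cold voxels, and repeat) can be caricatured in a few lines. This toy loop is our illustration of the general strategy, not the authors' algorithm; the dose-deposition matrix and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_beamlets = 40, 20
D = rng.uniform(0.0, 1.0, (n_vox, n_beamlets))   # toy dose-deposition matrix
D[:10] *= 0.5                                    # stand-in for low-density lung voxels
prescription = 60.0

w = np.full(n_beamlets, 1.0)
for it in range(20):
    dose = D @ w
    dose *= prescription / dose.mean()           # renormalize to the prescription
    cold = dose < 0.98 * prescription            # PTV voxels missing dose
    if not cold.any():
        break
    # boost beamlets that contribute relatively more to the cold voxels
    w *= 1.0 + 0.05 * D[cold].mean(axis=0) / D.mean(axis=0)

print(f"iterations: {it + 1}, PTV max/min dose ratio: {dose.max() / dose.min():.3f}")
```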

  13. Cross section and method uncertainties: the application of sensitivity analysis to study their relationship in radiation transport benchmark problems

    International Nuclear Information System (INIS)

    Weisbin, C.R.; Oblow, E.M.; Ching, J.; White, J.E.; Wright, R.Q.; Drischler, J.

    1975-08-01

    Sensitivity analysis is applied to the study of an air transport benchmark calculation to quantify and distinguish between cross-section and method uncertainties. The boundary detector response was converged with respect to spatial and angular mesh size, P_l expansion of the scattering kernel, and the number and location of energy grid boundaries. The uncertainty in the detector response due to uncertainties in nuclear data is 17.0 percent (one standard deviation, not including uncertainties in energy and angular distribution) based upon the ENDF/B-IV ''error files'', including correlations in energy and reaction type. Differences of approximately 6 percent can be attributed exclusively to differences in processing multigroup transfer matrices. Formal documentation of the PUFF computer program for the generation of multigroup covariance matrices is presented. (47 figures, 14 tables) (U.S.)

  14. Comparison of the dose evaluation methods for criticality accident

    International Nuclear Information System (INIS)

    Shimizu, Yoshio; Oka, Tsutomu

    2004-01-01

    Improvement of dose evaluation methods for criticality accidents is important for rationalizing the design of nuclear fuel cycle facilities. The source spectra of neutrons and gamma rays from a criticality accident depend on the condition of the source: its materials, moderation, density and so on. A comparison of dose evaluation methods for a criticality accident is made, and several methods combining criticality calculations with shielding calculations are proposed. Prompt neutron and gamma-ray doses from nuclear criticality in several uranium systems have been evaluated in the Nuclear Criticality Slide Rule. The uranium metal source (unmoderated system) and the uranyl nitrate solution source (moderated system) in the rule are evaluated by several calculation methods, i.e., combinations of code and cross-section library, as follows: (a) SAS1X (ENDF/B-IV); (b) MCNP4C (ENDF/B-VI) with ANISN (DLC23E or JSD120); (c) MCNP4C with MCNP4C (ENDF/B-VI). Each consists of a criticality calculation and a shielding calculation. The calculation methods are compared with respect to the tissue absorbed dose and the spectra at 2 m from the source. (author)

  15. Estimation of absorbed doses on the basis of cytogenetic methods

    International Nuclear Information System (INIS)

    Shevchenko, V.A.; Rubanovich, A.V.; Snigiryova, G.P.

    1998-01-01

    Long-term studies in the field of radiation cytogenetics have revealed the relationship between the induction of chromosome aberrations and the type, intensity and dose of ionizing radiation. This relationship serves as the foundation of biological dosimetry and has been used in practice to estimate absorbed doses in people exposed to emergency irradiation. The need for methods of biological dosimetry became most pressing in connection with the Chernobyl accident in 1986, as well as with other radiation situations that occurred in the nuclear industry of the former USSR. The materials presented in our works demonstrate the possibility of applying cytogenetic methods to assess absorbed doses in populations of different regions exposed to radiation as a result of accidents at nuclear facilities (Chernobyl; the village of Muslyumovo on the Techa river; and the Three Mile Island nuclear power station in the USA, where an accident occurred in 1979). Fundamentally new possibilities for retrospective dose assessment are provided by the FISH method, which permits the assessment of absorbed doses several decades after the exposure occurred. In addition, this method makes it possible to reconstruct the dynamics of unstable chromosome aberrations (dicentrics and centric rings), which is important for further improvement of the method of biological dosimetry based on the analysis of unstable chromosome aberrations. The purpose of our presentation is to briefly describe the cytogenetic methods used in biological dosimetry, consider the statistical methods of data analysis, and describe concrete examples of their application. (J.P.N.)
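
    For the unstable aberrations mentioned above, dose estimation typically inverts a linear-quadratic calibration curve Y = c + αD + βD² fitted to dicentric yields. A minimal sketch follows; the coefficients are of the magnitude reported for acute gamma exposure but are illustrative, not a validated calibration.

```python
import math

def dose_from_dicentrics(y_obs, c=0.001, alpha=0.03, beta=0.06):
    """Invert Y = c + alpha*D + beta*D**2 for the absorbed dose D (Gy),
    taking the positive root of the quadratic."""
    disc = alpha ** 2 + 4.0 * beta * (y_obs - c)
    if disc < 0:
        raise ValueError("observed yield is below the assumed background")
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)

# 46 dicentrics scored in 500 cells -> yield of 0.092 per cell
print(f"estimated dose: {dose_from_dicentrics(46 / 500):.2f} Gy")
```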

  16. Benchmarking of methods for identification of antimicrobial resistance genes in bacterial whole genome data

    DEFF Research Database (Denmark)

    Clausen, Philip T. L. C.; Zankari, Ea; Aarestrup, Frank Møller

    2016-01-01

    A novel method, KmerResistance, which examines the co-occurrence of k-mers between the WGS data and a database of resistance genes, was developed and benchmarked against two methods in current use for identification of antibiotic resistance genes in bacterial WGS data: ResFinder and SRST2, which use an assembly/BLAST approach and BWA, respectively. Two datasets with a total of 339 isolates, covering five species and originating from the Oxford University Hospitals NHS Trust and Danish pig farms, were used, and the predicted resistance was compared with the observed phenotypes for all isolates. To challenge the sensitivity of the in silico methods further, the datasets were also down-sampled to 1% of the reads and reanalysed. The best results were obtained by identifying resistance genes by mapping directly against the raw reads.

  17. Dose measurement method suitable for management of food irradiation

    International Nuclear Information System (INIS)

    Tanaka, Ryuichi

    1990-01-01

    The report describes the major features of dose measurement performed for the management of food irradiation processes, outlines dose measuring methods suitable for this purpose, and reviews some activities toward establishing international standards for dose measurement. Recent traceability studies are also reviewed. Compared with the sterilization of medical materials, food irradiation differs in some major points from the viewpoint of dose measurement: foods can undergo significant changes in bulk density during irradiation, depending on their properties, and the variation in the uniformity of bulk density can be large within an irradiation unit and among different units. An accurate dosimeter and well-established traceability are essential for food irradiation control; basically, a dosimeter should have high reproducibility and a stable dose response, and should be easy to readjust to eliminate systematic errors. A new type of dosimeter was developed recently, in which ESR is used to measure the free radicals generated by radiation in crystals of alanine, an amino acid. Standardization of large-dose measurement procedures has been carried out by Committee E10 of ASTM. (N.K.)

  18. Calculation method for gamma dose rates from Gaussian puffs

    Energy Technology Data Exchange (ETDEWEB)

    Thykier-Nielsen, S; Deme, S; Lang, E

    1995-06-01

    The Lagrangian puff models are widely used for calculation of the dispersion of releases to the atmosphere. The basic output from such models is the concentration of material in the air and on the ground. The simplest method for calculating the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using the volume integral requires computer time usually exceeding what is available for real-time calculations. The volume integral for gamma doses can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but the accuracy is usually poor because only a few of the relevant parameters are considered. A multi-parameter method for the calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of Eγ, σy, the asymmetry factor σy/σz, the height of the puff centre H, and the distance from the puff centre Rxy. To accelerate the calculations, the released energy for each significant radionuclide in each energy group has been calculated and tabulated. Based on the precalculated values and a suitable interpolation procedure, the calculation of gamma doses needs only a short computing time and is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs.

  19. Calculation method for gamma dose rates from Gaussian puffs

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.; Deme, S.; Lang, E.

    1995-06-01

    The Lagrangian puff models are widely used for calculation of the dispersion of releases to the atmosphere. The basic output from such models is the concentration of material in the air and on the ground. The simplest method for calculating the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using the volume integral requires computer time usually exceeding what is available for real-time calculations. The volume integral for gamma doses can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but the accuracy is usually poor because only a few of the relevant parameters are considered. A multi-parameter method for the calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of Eγ, σy, the asymmetry factor σy/σz, the height of the puff centre H, and the distance from the puff centre Rxy. To accelerate the calculations, the released energy for each significant radionuclide in each energy group has been calculated and tabulated. Based on the precalculated values and a suitable interpolation procedure, the calculation of gamma doses needs only a short computing time and is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs

  20. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  1. Methods for calculating population dose from atmospheric dispersion of radioactivity

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, B L; Jow, H N; Lee, I S [Pittsburgh Univ., PA (USA)

    1978-06-01

    Curves are computed from which the population dose (man-rem) due to dispersal of radioactivity from a point source can be calculated in the Gaussian plume model by simple multiplication, and methods of using them and their limitations are considered. Illustrative examples are presented.
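
    The "simple multiplication" works because the collective dose is a product of the release, a dose conversion factor, the population density, and an integral over the plume dilution factor. A sketch under stated assumptions: Gaussian plume χ/Q, rough power-law dispersion parameters, and an entirely hypothetical dose conversion factor.

```python
import numpy as np

def chi_over_q(y, sigma_y, sigma_z, u, h):
    """Ground-level Gaussian plume dilution factor chi/Q (s/m^3) for an
    elevated point source of effective height h (m), wind speed u (m/s)."""
    return (np.exp(-y**2 / (2 * sigma_y**2)) * np.exp(-h**2 / (2 * sigma_z**2))
            / (np.pi * sigma_y * sigma_z * u))

Q = 1.0e12       # released activity, Bq (illustrative)
dcf = 1.0e-9     # rem per Bq*s/m^3 of time-integrated concentration (hypothetical)
rho = 100.0e-6   # population density, persons/m^2 (100 per km^2)
u, h = 4.0, 50.0

x = np.linspace(500.0, 50_000.0, 400)     # downwind distance, m
y = np.linspace(-5_000.0, 5_000.0, 201)   # crosswind distance, m
X, Y = np.meshgrid(x, y, indexing="ij")
sy, sz = 0.08 * X**0.92, 0.06 * X**0.91   # rough neutral-stability power laws

dose = chi_over_q(Y, sy, sz, u, h) * Q * dcf          # rem per person per cell
collective = dose.sum() * (x[1] - x[0]) * (y[1] - y[0]) * rho
print(f"collective dose ~ {collective:.2f} person-rem")
```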

  2. Benchmarking and application of the state-of-the-art uncertainty analysis methods XSUSA and SHARK-X

    International Nuclear Information System (INIS)

    Aures, A.; Bostelmann, F.; Hursin, M.; Leray, O.

    2017-01-01

    Highlights: • Application of the uncertainty analysis methods XSUSA and SHARK-X. • Propagation of nuclear data uncertainties through a PWR pin cell depletion calculation. • Uncertainty quantification of the eigenvalue, nuclide densities and Doppler coefficient. • Top contributors to the overall output uncertainty identified by sensitivity analysis. • Comparison with SAMPLER and TSUNAMI of the SCALE code package. - Abstract: This study presents collaborative work performed between GRS and PSI on benchmarking and application of the state-of-the-art uncertainty analysis methods XSUSA and SHARK-X. Applied to a PWR pin cell depletion calculation, both methods propagate input uncertainty from nuclear data to output uncertainty. The uncertainties of the multiplication factors, nuclide densities, and fuel temperature coefficients derived by both methods are compared at various burnup steps. Comparisons of these quantities are furthermore performed with the SAMPLER module of SCALE 6.2. The perturbation-theory-based TSUNAMI module of both SCALE 6.1 and SCALE 6.2 is additionally applied for comparisons of the reactivity coefficient.
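
    The random-sampling strategy behind tools such as XSUSA and SAMPLER is simple to demonstrate: perturb the nuclear data according to a covariance matrix, rerun the model for each sample, and take statistics of the output. The two-group stand-in model and the covariance numbers below are invented for illustration.

```python
import numpy as np

def k_inf(nu_sigma_f, sigma_a):
    return nu_sigma_f / sigma_a          # stand-in model, not a lattice code

mean = np.array([0.0050, 0.0042])        # nu*Sigma_f, Sigma_a (1/cm), illustrative
rel_cov = np.array([[0.02**2, 0.5 * 0.02 * 0.03],
                    [0.5 * 0.02 * 0.03, 0.03**2]])  # 2%/3%, correlation 0.5
cov = rel_cov * np.outer(mean, mean)

rng = np.random.default_rng(3)
samples = rng.multivariate_normal(mean, cov, size=1000)
k = k_inf(samples[:, 0], samples[:, 1])

print(f"k_inf = {k.mean():.4f} +/- {k.std(ddof=1):.4f}")
for name, col in zip(["nu*Sigma_f", "Sigma_a"], samples.T):
    # crude importance ranking: correlation of each input with the output
    print(f"  corr({name}, k_inf) = {np.corrcoef(col, k)[0, 1]:+.2f}")
```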

  3. Method to account for dose fractionation in analysis of IMRT plans: Modified equivalent uniform dose

    International Nuclear Information System (INIS)

    Park, Clinton S.; Kim, Yongbok; Lee, Nancy; Bucci, Kara M.; Quivey, Jeanne M.; Verhey, Lynn J.; Xia Ping

    2005-01-01

    Purpose: To propose a modified equivalent uniform dose (mEUD) to account for dose fractionation using the biologically effective dose, without losing the advantages of the generalized equivalent uniform dose (gEUD), and to report the calculated mEUD and gEUD in clinically used intensity-modulated radiotherapy (IMRT) plans. Methods and Materials: The proposed mEUD replaces the dose to each voxel in the gEUD formulation by a biologically effective dose with a normalization factor. We propose to use the term mEUD(Do/no) to include the total dose (Do) and number of fractions (no), and the term mEUDo to include the same total dose but a standard fraction size of 2 Gy. A total of 41 IMRT plans for patients with nasopharyngeal cancer treated at our institution between October 1997 and March 2002 were selected for the study. The gEUD and mEUD were calculated for the planning gross tumor volume (pGTV), planning clinical tumor volume (pCTV), parotid glands, and spinal cord. The prescription dose for these patients was 70 Gy to >95% of the pGTV and 59.4 Gy to >95% of the pCTV in 33 fractions. Results: The calculated average gEUD was 72.2 ± 2.4 Gy for the pGTV, 54.2 ± 7.1 Gy for the pCTV, 26.7 ± 4.2 Gy for the parotid glands, and 34.1 ± 6.8 Gy for the spinal cord. The calculated average mEUD(Do/no) using 33 fractions was 71.7 ± 3.5 Gy for mEUD(70/33) of the pGTV, 49.9 ± 7.9 Gy for mEUD(59.5/33) of the pCTV, 27.6 ± 4.8 Gy for mEUD(26/33) of the parotid glands, and 32.7 ± 7.8 Gy for mEUD(45/33) of the spinal cord. Conclusion: The proposed mEUD, combining the gEUD with the biologically effective dose, preserves all advantages of the gEUD while reflecting the fractionation effects and the linear and quadratic survival characteristics.
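
    The gEUD formula is standard: gEUD = (mean_i d_i^a)^(1/a) over equal-volume voxels. The mEUD sketch below is our reading of the abstract, in which each voxel dose is replaced by its biologically effective dose and normalized so that the reference prescription maps to itself; it is not the authors' published formula.

```python
import numpy as np

def gEUD(d, a):
    """Generalized equivalent uniform dose over equal-volume voxels."""
    d = np.asarray(d, dtype=float)
    return np.mean(d ** a) ** (1.0 / a)

def mEUD(d, a, n_frac, ab, d_ref, n_ref):
    """Voxel doses are mapped to BED = d*(1 + d/(n_frac*ab)) and divided by
    the same factor evaluated at the reference dose/fractionation, so a
    uniform dose d_ref in n_ref fractions is left unchanged (our reading
    of the abstract, not the authors' published formula)."""
    d = np.asarray(d, dtype=float)
    bed = d * (1.0 + d / (n_frac * ab))
    norm = 1.0 + d_ref / (n_ref * ab)
    return gEUD(bed / norm, a)

doses = np.random.default_rng(4).normal(70.0, 2.0, 5000)   # pGTV voxel doses, Gy
print(f"gEUD = {gEUD(doses, a=-10):.1f} Gy, mEUD(70/33) = "
      f"{mEUD(doses, a=-10, n_frac=33, ab=10.0, d_ref=70.0, n_ref=33):.1f} Gy")
```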

  4. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan

    International Nuclear Information System (INIS)

    Suwazono, Yasushi; Nogawa, Kazuhiro; Uetani, Mirei; Nakada, Satoru; Kido, Teruhiko; Nakagawa, Hideaki

    2011-01-01

    Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.

  5. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Suwazono, Yasushi, E-mail: suwa@faculty.chiba-u.jp [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nogawa, Kazuhiro; Uetani, Mirei [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nakada, Satoru [Safety and Health Organization, Chiba University, 1-33 Yayoicho, Inageku, Chiba 263-8522 (Japan); Kido, Teruhiko [Department of Community Health Nursing, Kanazawa University School of Health Sciences, 5-11-80 Kodatsuno, Kanazawa, Ishikawa 920-0942 (Japan); Nakagawa, Hideaki [Department of Epidemiology and Public Health, Kanazawa Medical University, 1-1 Daigaku, Uchnada, Ishikawa 920-0293 (Japan)

    2011-02-15

    Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.
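
    The hybrid approach can be sketched in a few lines: assume the log-transformed marker is normally distributed with a mean that rises with dose, set the cutoff so that 5% of unexposed subjects exceed it (background risk P0 = 5%), and solve for the dose at which the additional risk reaches the BMR. All coefficients below are hypothetical, not the study's fitted values, and a real analysis would also profile the likelihood to obtain the BMDL.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

b0, b1, sigma = math.log(80.0), 0.12, 0.9   # log(beta2-MG) vs U-Cd, hypothetical
P0, BMR = 0.05, 0.05                        # background risk, benchmark response
cutoff = b0 + 1.6449 * sigma                # 5% of unexposed exceed this level

def additional_risk(d):
    p = 1.0 - norm_cdf((cutoff - (b0 + b1 * d)) / sigma)
    return p - P0

lo, hi = 0.0, 50.0                          # bracket, ug Cd/g creatinine
for _ in range(60):                         # bisection for additional_risk = BMR
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if additional_risk(mid) < BMR else (lo, mid)
print(f"BMD ~ {0.5 * (lo + hi):.2f} ug/g creatinine")
```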

  6. Computerized simulation methods for dose reduction, in radiodiagnosis

    International Nuclear Information System (INIS)

    Brochi, M.A.C.

    1990-01-01

    The present work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. Parameters of radiographic techniques that yield a previously chosen standard radiographic image are studied, so that the radiation doses absorbed by the patient can be compared. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after verifying its validity experimentally, it was applied to breast and arm-fracture radiographs. It was observed that the choice of the filter material is not an important factor, because aluminium, iron, copper, gadolinium and other filters behaved analogously. A method for comparing materials based on spectral matching is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both being more efficient than aluminium in terms of exposure time, but not of dose. (author)

  7. Benchmarking pKa prediction methods for Lys115 in acetoacetate decarboxylase.

    Science.gov (United States)

    Liu, Yuli; Patel, Anand H G; Burger, Steven K; Ayers, Paul W

    2017-05-01

    Three different pKa prediction methods were used to calculate the pKa of Lys115 in acetoacetate decarboxylase (AADase): the empirical method PROPKA, the multiconformation continuum electrostatics (MCCE) method, and the molecular dynamics/thermodynamic integration (MD/TI) method with implicit solvent. As expected, accurate pKa prediction for Lys115 depends on the protonation patterns of other ionizable groups, especially the nearby Glu76. However, since the prediction methods do not explicitly sample the protonation patterns of nearby residues, this must be done manually. When Glu76 is deprotonated, all three methods give an incorrect pKa value for Lys115. If a protonated Glu76 is used in the MD/TI calculation, the pKa of Lys115 is predicted to be 5.3, which agrees well with the experimental value of 5.9. This result agrees with previous site-directed mutagenesis studies, where mutation of Glu76 (negatively charged when deprotonated) to Gln (neutral) causes no change in Km, suggesting that Glu76 has no effect on the pKa shift of Lys115. Thus, we postulate that the pKa of Glu76 is also shifted, so that Glu76 is protonated (neutral) in AADase. Graphical abstract: Simulated abundances of protonated species as pH is varied.
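
    The "abundances of protonated species as pH is varied" in the graphical abstract follow directly from the Henderson-Hasselbalch relation; a minimal illustration for the two pKa values discussed above:

```python
def frac_protonated(pH, pKa):
    """Fraction of a monoprotic site that is protonated at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (4.0, 5.0, 5.3, 5.9, 7.0):
    print(f"pH {pH:.1f}: Lys115 protonated fraction, "
          f"predicted pKa 5.3: {frac_protonated(pH, 5.3):.2f}, "
          f"experimental pKa 5.9: {frac_protonated(pH, 5.9):.2f}")
```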

  8. Benchmarking Relatedness Inference Methods with Genome-Wide Data from Thousands of Relatives.

    Science.gov (United States)

    Ramstetter, Monica D; Dyer, Thomas D; Lehman, Donna M; Curran, Joanne E; Duggirala, Ravindranath; Blangero, John; Mezey, Jason G; Williams, Amy L

    2017-09-01

    Inferring relatedness from genomic data is an essential component of genetic association studies, population genetics, forensics, and genealogy. While numerous methods exist for inferring relatedness, thorough evaluation of these approaches in real data has been lacking. Here, we report an assessment of 12 state-of-the-art pairwise relatedness inference methods using a data set with 2485 individuals contained in several large pedigrees that span up to six generations. We find that all methods have high accuracy (92-99%) when detecting first- and second-degree relationships, but their accuracy dwindles to 76% of relative pairs correctly inferred for more distant relationships. Overall, the most accurate methods are Estimation of Recent Shared Ancestry (ERSA) and approaches that compute total IBD sharing using the output from GERMLINE and Refined IBD to infer relatedness. Combining information from the most accurate methods provides little accuracy improvement, indicating that novel approaches, such as new methods that leverage relatedness signals from multiple samples, are needed to achieve a sizeable jump in performance. Copyright © 2017 Ramstetter et al.

  9. Comparison of different dose calculation methods for irregular photon fields

    International Nuclear Information System (INIS)

    Zakaria, G.A.; Schuette, W.

    2000-01-01

    In this work, four calculation methods (the Wrede method, the Clarkson method of sector integration, the beam-zone method of Quast, and the pencil-beam method of Ahnesjoe) are used to calculate point doses in different irregular photon fields. The calculations cover a typical mantle field, an inverted-Y field and several blocked fields for 4 and 10 MV photon energies. The results are compared with measurements in a water phantom. The Clarkson and pencil-beam methods proved to be equally accurate; both are distinguished by minimal deviations and are applied in our routine clinical work. The Wrede and beam-zone methods deliver useful results on the central axis but show larger deviations when calculating points off the central axis. (orig.)
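
    Of the four methods, Clarkson sector integration is the easiest to sketch: the scatter contribution at the calculation point is averaged over angular sectors using the distance to the field edge in each sector. The SAR curve below is a smooth stand-in for measured scatter-air-ratio data, and the sketch assumes the field is star-shaped as seen from the calculation point (re-entrant sectors, which the full method handles by subtraction, are ignored).

```python
import math

def sar(r_cm):
    return 1.2 * (1.0 - math.exp(-r_cm / 8.0))   # hypothetical SAR curve

def edge_distance(theta, vertices):
    """Distance from the origin (calculation point) to the polygon boundary
    along direction theta, for a field given by its vertices (cm)."""
    best = 0.0
    dx, dy = math.cos(theta), math.sin(theta)
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        ex, ey = x2 - x1, y2 - y1
        den = dx * ey - dy * ex
        if abs(den) < 1e-12:
            continue                      # ray parallel to this edge
        t = (x1 * ey - y1 * ex) / den     # distance along the ray
        s = (x1 * dy - y1 * dx) / den     # position along the edge
        if t > 0.0 and 0.0 <= s <= 1.0:
            best = max(best, t)
    return best

def clarkson_scatter(vertices, n_sectors=72):
    total = 0.0
    for k in range(n_sectors):
        theta = 2.0 * math.pi * (k + 0.5) / n_sectors
        total += sar(edge_distance(theta, vertices))
    return total / n_sectors

# L-shaped (blocked) field with the calculation point at the origin
field = [(-5, -5), (5, -5), (5, 1), (1, 1), (1, 5), (-5, 5)]
print(f"mean SAR at the point: {clarkson_scatter(field):.3f}")
```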

  10. Methods of determining the effective dose in dental radiology

    International Nuclear Information System (INIS)

    Thilander-Klang, A.; Helmrot, E.

    2010-01-01

    A wide variety of X-ray equipment is used today in dental radiology, including intra-oral, orthopantomographic, cephalometric, cone-beam computed tomography (CBCT) and computed tomography (CT) units. This raises the question of how the radiation risks resulting from different kinds of examinations should be compared. The risk to the patient is usually expressed in terms of effective dose. However, it is difficult to determine its reliability, and it is difficult to make comparisons, especially when different modalities are used. The classification of the new CBCT units is also problematic, as they are sometimes classified as CT units. This leads to problems in choosing the best dosimetric method, especially when the examination geometry more closely resembles an ordinary orthopantomographic examination, as the axis of rotation is not at the centre of the patient and small radiation field sizes are used. The purpose of this study was to present different methods for estimating the effective dose from the equipment currently used in dental radiology, and to discuss their limitations. The methods are compared on the basis of commonly used measurable and computable dose quantities, and their reliability in estimating the effective dose. (authors)

  11. Robust EM Continual Reassessment Method in Oncology Dose Finding

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2012-01-01

    The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092
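
    A bare-bones CRM update (one-parameter power model, grid posterior) shows the machinery that the EM-CRM extends to unobserved toxicity outcomes and multiple skeletons. This is our illustration, not the authors' implementation; the skeleton, prior and data are invented.

```python
import math

skeleton = [0.05, 0.10, 0.20, 0.30, 0.50]   # prespecified toxicity probabilities
target = 0.25                               # target toxicity rate
a_grid = [i * 0.02 - 3.0 for i in range(301)]        # parameter grid on [-3, 3]
prior = [math.exp(-a * a / 4.0) for a in a_grid]     # N(0, 2) prior, unnormalized

def posterior_tox_probs(data):
    """data: (dose_level, toxicity 0/1) pairs. Dose-toxicity model:
    p_level(a) = skeleton[level] ** exp(a). Returns the posterior-mean
    toxicity probability at each dose level."""
    w = prior[:]
    for i, a in enumerate(a_grid):
        for lvl, tox in data:
            p = skeleton[lvl] ** math.exp(a)
            w[i] *= p if tox else (1.0 - p)
    z = sum(w)
    return [sum(wi * skeleton[lvl] ** math.exp(a) for wi, a in zip(w, a_grid)) / z
            for lvl in range(len(skeleton))]

data = [(1, 0), (1, 0), (2, 1), (2, 0), (2, 0)]      # outcomes so far
post = posterior_tox_probs(data)
next_level = min(range(len(post)), key=lambda l: abs(post[l] - target))
print([round(p, 3) for p in post], "-> assign next cohort to level", next_level)
```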

  12. Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set.

    Science.gov (United States)

    Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P

    2017-08-14

    The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest-neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks are the top-performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, Multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi

  13. Benchmarking the DFT+U method for thermochemical calculations of uranium molecular compounds and solids.

    Science.gov (United States)

    Beridze, George; Kowalski, Piotr M

    2014-12-18

    Ability to perform feasible and reliable computation of the thermochemical properties of chemically complex actinide-bearing materials would be of great importance for nuclear engineering. Unfortunately, density functional theory (DFT), which in many instances is the only affordable ab initio method, often fails for actinides. Among various shortcomings, it leads to wrong estimates of the enthalpies of reactions between actinide-bearing compounds, putting the applicability of the DFT approach to the modeling of thermochemical properties of actinide-bearing materials into question. Here we test the performance of the DFT+U method, a computationally affordable extension of DFT that explicitly accounts for the correlations between f-electrons, for prediction of the thermochemical properties of simple uranium-bearing molecular compounds and solids. We demonstrate that the DFT+U approach significantly improves the description of reaction enthalpies for the uranium-bearing gas-phase molecular compounds and solids, and the deviations from the experimental values are comparable to those obtained with much more computationally demanding methods. Good results are obtained with Hubbard U parameter values derived using the linear response method of Cococcioni and de Gironcoli. We found that the value of the Coulomb on-site repulsion, represented by the Hubbard U parameter, strongly depends on the oxidation state of the uranium atom. Last but not least, we demonstrate that thermochemistry data can be successfully used to estimate the value of the Hubbard U parameter needed for DFT+U calculations.

  14. Complex absorbing potentials within EOM-CC family of methods: Theory, implementation, and benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Zuev, Dmitry; Jagau, Thomas-C.; Krylov, Anna I. [Department of Chemistry, University of Southern California, Los Angeles, California 90089-0482 (United States); Bravaya, Ksenia B. [Department of Chemistry, Boston University, Boston, Massachusetts 02215-2521 (United States); Epifanovsky, Evgeny [Department of Chemistry, University of Southern California, Los Angeles, California 90089-0482 (United States); Department of Chemistry, University of California, Berkeley, California 94720 (United States); Q-Chem, Inc., 6601 Owens Drive, Suite 105 Pleasanton, California 94588 (United States); Shao, Yihan [Q-Chem, Inc., 6601 Owens Drive, Suite 105 Pleasanton, California 94588 (United States); Sundstrom, Eric; Head-Gordon, Martin [Department of Chemistry, University of California, Berkeley, California 94720 (United States)

    2014-07-14

    A production-level implementation of equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) for electron attachment and excitation energies augmented by a complex absorbing potential (CAP) is presented. The new method enables the treatment of metastable states within the EOM-CC formalism in a manner similar to bound states. The numerical performance of the method and the sensitivity of resonance positions and lifetimes to the CAP parameters and the choice of one-electron basis set are investigated. A protocol for studying molecular shape resonances based on the use of standard basis sets and a universal criterion for choosing the CAP parameters are presented. Our results for a variety of π* shape resonances of small to medium-size molecules demonstrate that CAP-augmented EOM-CCSD is competitive with other theoretical approaches for the treatment of resonances and is often able to reproduce experimental results.

  15. The D1 method: career dose estimation from a combination of historical monitoring data and a single year's dose data

    International Nuclear Information System (INIS)

    Sont, W.N.

    1995-01-01

    A method is introduced to estimate career doses from a combination of historical monitoring data and a single year's dose data. This method, called D1, eliminates the bias arising from incorporating historical dose data from times when occupational doses were generally much higher than they are today. Doses calculated by this method are still conditional on the preservation of the status quo in the effectiveness of radiation protection. The method takes into account the variation of the annual dose, and of the probability of being monitored, with the time elapsed since the start of a career. It also allows for the calculation of a standard error of the projected career dose. Results from recent Canadian dose data are presented. (author)
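
    A loose sketch of the idea: estimate the career dose from a single year's records by stratifying the annual dose and the monitoring probability by time elapsed since career start, then summing over a nominal career. The synthetic data and the monitoring-probability stand-in below are invented for illustration and are not the D1 method as published.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical single-year dose file: for each monitored worker, years
# elapsed since career start and the annual dose (mSv) in the survey year.
elapsed = rng.integers(0, 30, size=5000)
annual_dose = rng.gamma(2.0, 0.5, size=5000) * np.exp(-0.03 * elapsed)

career_len = 30
projected = 0.0
n_entrants = max((elapsed == 0).sum(), 1)   # cohort entering this year
for k in range(career_len):
    sel = elapsed == k
    if not sel.any():
        continue
    mean_dose_k = annual_dose[sel].mean()
    # Crude stand-in for the probability of still being monitored k years
    # into a career, relative to the entering cohort:
    p_monitored_k = min(sel.sum() / n_entrants, 1.0)
    projected += p_monitored_k * mean_dose_k

print(f"projected career dose: {projected:.1f} mSv")
```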

  16. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    International Nuclear Information System (INIS)

    Wahlberg, Malin; Pazsit, Imre

    2005-01-01

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a series of model transport problems for which it is also possible to obtain an analytical solution. Due to the non-linear character of the embedding equations, their solution can only be obtained numerically; however, this can be done via a robust and effective iteration scheme. In return, the domain of applicability of the method is far wider than the model problems investigated in this paper. The use of the invariant embedding method is demonstrated in three different areas. The first is the calculation of the energy spectrum of reflected (sputtered) particles from a multiplying medium, where the multiplication arises from recoil production. Both constant cross sections and energy-dependent cross sections with a power-law dependence were used in the calculations. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel and unexpected application, since the embedding equations do not resolve the depth variable. The third application demonstrates that solutions in an infinite medium and a half-space are interrelated through embedding-like integral equations, by the solution of which the reflected flux from a half-space can be reconstructed from solutions in an infinite medium, or vice versa. In all cases the invariant embedding method proved to be robust, fast and monotonically converging to the exact solutions. (authors)
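
    The iterative solution of a non-linear embedding-type equation can be illustrated with the closely related Chandrasekhar H-equation for isotropic scattering in a half-space; the fixed-point scheme below is a didactic sketch (not the authors' code) and typically converges monotonically, mirroring the behaviour reported here.

```python
import numpy as np

c = 0.9                                  # mean secondaries per collision (< 1)
mu = np.linspace(1e-3, 1.0, 200)         # direction cosines
w = np.full_like(mu, mu[1] - mu[0])      # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

H = np.ones_like(mu)
for _ in range(300):                     # fixed-point iteration on
    # H(mu) = 1 / (1 - mu * (c/2) * integral of H(mu')/(mu + mu') dmu')
    integ = ((c / 2.0) * (H * w)[None, :] /
             (mu[:, None] + mu[None, :])).sum(axis=1)
    H = 1.0 / (1.0 - mu * integ)

print(f"H(1) ≈ {H[-1]:.3f}")             # compare with tabulated H-functions
```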

  17. Benchmarking the performance of fixed-image receptor digital radiographic systems part 1: a novel method for image quality analysis.

    Science.gov (United States)

    Lee, Kam L; Ireland, Timothy A; Bernardo, Michael

    2016-06-01

    This is the first part of a two-part study benchmarking the performance of fixed digital radiographic general X-ray systems. This paper concentrates on findings related to the quantitative analysis techniques used to establish comparative image quality metrics. A systematic technical comparison of the evaluated systems is presented in part two of this study. A novel quantitative image quality analysis method is presented, with technical considerations addressed for peer review. The novel method was applied to seven general radiographic systems with four different makes of radiographic image receptor (12 image receptors in total). For the System Modulation Transfer Function (sMTF), the use of a grid was found to reduce veiling glare and decrease roll-off. The major contributor to sMTF degradation was found to be focal spot blurring. For the System Normalised Noise Power Spectrum (sNNPS), all systems examined were found to have similar sNNPS responses. A mathematical model is presented to explain how the use of a stationary grid may cause a difference between horizontal and vertical sNNPS responses.
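
    MTF-style metrics are conventionally derived from an edge profile; the sketch below shows the standard ESF -> LSF -> Fourier-transform chain on a synthetic edge rather than measured detector data.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 512)                # mm, across the edge
esf = 0.5 * (1.0 + np.tanh(x / 0.2))           # synthetic edge spread function
lsf = np.gradient(esf, x)                      # line spread function
lsf /= lsf.sum()                               # normalize so MTF(0) = 1
mtf = np.abs(np.fft.rfft(lsf))
freq = np.fft.rfftfreq(x.size, d=x[1] - x[0])  # spatial frequency, cycles/mm
print(f"MTF50 ≈ {freq[np.argmax(mtf < 0.5)]:.2f} cycles/mm")
```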

  18. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1995-01-01

    This report is a compilation of the information submitted by AECL, CIAE, JAERI, ORNL and Siemens in response to a need identified at the 'Workshop on R and D Needs' at the IGORR-3 meeting. The survey compiled information on the national standards applied to the Safety Quality Assurance (SQA) programs undertaken by the participants. Information was assembled for the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods used to verify and validate the codes and libraries. Although the survey was not comprehensive, it provides a basis for exchanging information of common interest to the research reactor community

  19. Comparison between calculation methods of dose rates in gynecologic brachytherapy

    International Nuclear Information System (INIS)

    Vianello, E.A.; Biaggio, M.F.; D R, M.F.; Almeida, C.E. de

    1998-01-01

    In radiation treatments of gynecologic tumors, it is necessary to evaluate the quality of the results obtained by different methods of calculating the dose rates at the points of clinical interest (point A, rectum, bladder). The present work compares the results obtained by two methods: the three-dimensional Manual Calculation Method (MCM) (Vianello E. et al., 1998), which uses orthogonal radiographs for each patient under treatment, and the Theraplan/TP-11 planning system (Theratronics International Limited, 1990), the latter verified experimentally (Vianello et al., 1996). The results show that MCM can be used in clinical physics practice with percentage differences comparable to those of the computerized programs. (Author)

  20. Comparing different methods for estimating radiation dose to the conceptus

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Rendon, X.; Dedulle, A. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); Walgraeve, M.S.; Woussen, S.; Zhang, G. [University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Bosmans, H. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); University Hospitals Leuven, Department of Radiology, Leuven (Belgium); Zanca, F. [KU Leuven, Department of Imaging and Pathology, Division of Medical Physics and Quality Assessment, Herestraat 49, box 7003, Leuven (Belgium); GE Healthcare, Buc (France)

    2017-02-15

    To compare different methods available in the literature for estimating radiation dose to the conceptus (D_conceptus) against a patient-specific Monte Carlo (MC) simulation and a commercial software package (CSP). Eight voxel models from abdominopelvic CT exams of pregnant patients were generated. D_conceptus was calculated with an MC framework including patient-specific longitudinal tube current modulation (TCM). For the same patients, the dose to the uterus, D_uterus, was calculated as an alternative to D_conceptus with a CSP that uses a standard-size, non-pregnant phantom and a generic TCM curve. The percentage error between D_uterus and D_conceptus was studied. The dose to the conceptus and the percentage error with respect to D_conceptus were also estimated for three methods in the literature. The percentage error ranged from -15.9% to 40.0% when comparing MC to CSP. When comparing the TCM profiles with the generic TCM profile from the CSP, differences were observed due to patient habitus and conceptus position. For the other methods, the percentage error ranged from -30.1% to 13.5%, but applicability was limited. Estimating an accurate D_conceptus requires a patient-specific approach that the CSP investigated cannot provide. Available methods in the literature can provide a better estimation if applicable to patient-specific cases. (orig.)

  1. What is the best practice for benchmark regulation of electricity distribution? Comparison of DEA, SFA and StoNED methods

    International Nuclear Information System (INIS)

    Kuosmanen, Timo; Saastamoinen, Antti; Sipiläinen, Timo

    2013-01-01

    Electricity distribution is a natural local monopoly. In many countries, the regulators of this sector apply frontier methods such as data envelopment analysis (DEA) or stochastic frontier analysis (SFA) to estimate the efficient cost of operation. In Finland, a new StoNED method was adopted in 2012. This paper compares DEA, SFA and StoNED in the context of regulating electricity distribution. Using data from Finland, we compare the impacts of methodological choices on cost efficiency estimates and acceptable cost. While the efficiency estimates are highly correlated, the cost targets reveal major differences. In addition, we examine the performance of the methods by Monte Carlo simulations. We calibrate the data generation process (DGP) to closely match the empirical data and the model specification of the regulator. We find that the StoNED estimator yields a root mean squared error (RMSE) of 4% with a sample size of 100. Precision improves as the sample size increases. The DEA estimator yields an RMSE of approximately 10%, but performance deteriorates as the sample size increases. The SFA estimator has an RMSE of 144%. The poor performance of SFA is due to the wrong functional form and multicollinearity. - Highlights: • We compare DEA, SFA and StoNED methods in the context of regulation of electricity distribution. • Both empirical comparisons and Monte Carlo simulations are presented. • Choice of benchmarking method has a significant economic impact on the regulatory outcomes. • StoNED yields the most precise results in the Monte Carlo simulations. • Five lessons concerning heterogeneity, noise, frontier, simulations, and implementation
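
    A minimal input-oriented, constant-returns-to-scale DEA efficiency score can be computed with a linear program; the sketch below uses a toy one-input/one-output dataset, not the Finnish regulator's data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs_input(X, Y, o):
    """Input-oriented CRS (CCR) efficiency of unit o.
    X: (m inputs, n units), Y: (s outputs, n units)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]             # minimize theta over [theta, lam]
    A_in = np.c_[-X[:, [o]], X]             # X @ lam <= theta * x_o
    A_out = np.c_[np.zeros((s, 1)), -Y]     # Y @ lam >= y_o
    A = np.r_[A_in, A_out]
    b = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.fun

# Toy distributors: one input (operating cost), one output (energy delivered).
X = np.array([[10.0, 20.0, 30.0, 50.0]])
Y = np.array([[10.0, 30.0, 30.0, 60.0]])
for o in range(4):
    print(f"unit {o}: efficiency = {dea_crs_input(X, Y, o):.2f}")
```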

  2. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1996-03-01

    During the 'Workshop on R and D needs' at the 3rd Meeting of the International Group on Research Reactors (IGORR-III), the participants agreed that it would be useful to compile a survey of the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods various organizations use to verify and validate their codes and libraries. Five organizations, Atomic Energy of Canada Limited (AECL, Canada), China Institute of Atomic Energy (CIAE, People's Republic of China), Japan Atomic Energy Research Institute (JAERI, Japan), Oak Ridge National Laboratories (ORNL, USA), and Siemens (Germany) responded to the survey. The results of the survey are compiled in this report. (author) 36 refs., 3 tabs

  3. Fully automated treatment planning for head and neck radiotherapy using a voxel-based dose prediction and dose mimicking method

    Science.gov (United States)

    McIntosh, Chris; Welch, Mattea; McNiven, Andrea; Jaffray, David A.; Purdie, Thomas G.

    2017-08-01

    Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present a probabilistic, atlas-based approach which predicts the dose for novel patients using a set of automatically selected most-similar patients (atlases). The output is a spatial dose objective, which specifies the desired dose-per-voxel and therefore replaces the need to specify and tune dose-volume objectives. Voxel-based dose mimicking optimization then converts the predicted dose distribution to a complete treatment plan, with dose calculation using a collapsed cone convolution dose engine. In this study, we investigated automated planning for right-sided oropharynx head and neck patients treated with IMRT and VMAT. We compare four versions of our dose prediction pipeline using a database of 54 training and 12 independent testing patients by evaluating 14 clinical dose evaluation criteria. Our preliminary results are promising and demonstrate that automated methods can generate dose distributions comparable to clinical plans. Overall, automated plans achieved an average of 0.6% higher dose for target coverage evaluation criteria, and 2.4% lower dose at the organs-at-risk criteria levels, compared with clinical plans. There was no statistically significant difference detected in high-dose conformity between automated and clinical plans as measured by the conformation number. Automated plans achieved nine more unique criteria than clinical plans across the 12 patients tested; automated plans scored a significantly higher dose at the evaluation limit for two high-risk target coverage criteria and a significantly lower dose for one critical organ maximum dose. The novel dose prediction method with dose mimicking can generate complete treatment plans in 12-13 min without user interaction. It is a promising approach for fully automated treatment planning.
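
    The dose-mimicking step can be viewed as a constrained least-squares fit of deliverable dose to the predicted per-voxel dose. The projected-gradient sketch below uses a random dose-influence matrix as a stand-in for a real dose engine; it illustrates the optimization idea, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((500, 40))              # hypothetical voxel-by-beamlet dose matrix
d_pred = rng.random(500) * 2.0         # predicted (atlas-based) voxel doses, Gy

w = np.zeros(40)                       # beamlet weights, kept non-negative
lr = 1.0 / np.linalg.norm(A, ord=2) ** 2   # safe step from the spectral norm
for _ in range(2000):                  # projected gradient descent on ||Aw - d||^2
    w = np.clip(w - lr * (A.T @ (A @ w - d_pred)), 0.0, None)

print("mean |dose error| (Gy):", np.abs(A @ w - d_pred).mean())
```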

  4. Iterative methods for dose reduction and image enhancement in tomography

    Science.gov (United States)

    Miao, Jianwei; Fahimian, Benjamin Pooya

    2012-09-18

    A system and method are disclosed for creating a three-dimensional cross-sectional image of an object by the reconstruction of its projections, iteratively refined through modification in object space and Fourier space. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate the interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods require.
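
    One common family of such iterative schemes alternates between enforcing measured Fourier-space data and physical object-space constraints. The toy sketch below (not the patented algorithm itself) reconstructs a phantom from a subset of Fourier samples under a non-negativity constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((64, 64))
obj[24:40, 20:44] = 1.0                        # hypothetical phantom
f_true = np.fft.fft2(obj)
mask = rng.random((64, 64)) < 0.35             # "measured" Fourier subset

recon = np.zeros_like(obj)
for _ in range(200):
    f = np.fft.fft2(recon)
    f[mask] = f_true[mask]                     # Fourier space: restore data
    recon = np.fft.ifft2(f).real
    recon[recon < 0.0] = 0.0                   # object space: non-negativity
print("reconstruction RMSE:", float(np.sqrt(((recon - obj) ** 2).mean())))
```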

  5. Phototransfer method of determining archaeological dose of pottery sherds

    International Nuclear Information System (INIS)

    Sasidharan, R.; Sunta, C.M.; Nambi, K.S.V.

    1978-01-01

    The method of PTTL (phototransfer thermoluminescence) dating works satisfactorily over a very wide range of archaeological doses, and the upper limit seems to be around 4000 rads. The first TL peak generally occurs at 65°C or 100°C in different varieties of quartz, and both seem to be satisfactory for PTTL dating if the reading procedure is standardised to account for the fast fading of these peaks at room temperature. (author)

  6. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIR). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic
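
    As a minimal illustration of interval-based wavelength selection (in the spirit of iPLS), the sketch below cross-validates a PLS model over candidate wavelength windows of synthetic spectra and keeps the best-scoring interval.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic "spectra": 300 samples x 400 wavelength channels; only the band
# at channels 120-139 actually carries the property information.
X = rng.normal(size=(300, 400))
y = X[:, 120:140].mean(axis=1) + 0.1 * rng.normal(size=300)

best = None
for start in range(0, 400, 20):              # candidate 20-channel intervals
    cols = slice(start, start + 20)
    score = cross_val_score(PLSRegression(n_components=3),
                            X[:, cols], y, scoring="r2", cv=5).mean()
    if best is None or score > best[0]:
        best = (score, start)
print(f"best interval starts at channel {best[1]} (CV R^2 = {best[0]:.2f})")
```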

  7. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators

    International Nuclear Information System (INIS)

    Belley, Matthew D.; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J.; Chen, Benny J.; Dewhirst, Mark W.; Yoshizumi, Terry T.

    2014-01-01

    Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 Application for Tomographic Emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%-32%, depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs
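
    The size of the error introduced by a single-value dose prescription can be seen with simple arithmetic; the organ doses below are illustrative numbers, not the paper's simulated values.

```python
# Percent error incurred by prescribing a single whole-body dose when the
# organ doses actually differ (all values invented for illustration).
organ_dose = {"liver": 4.6, "lung": 5.2, "kidney": 4.1, "marrow": 3.9}  # Gy
prescribed = 4.5  # Gy, single-value prescription
for organ, d in organ_dose.items():
    print(f"{organ:7s}: {(d - prescribed) / prescribed * 100:+.1f}% error")
```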

  8. How diverse are diversity assessment methods? A comparative analysis and benchmarking of molecular descriptor space.

    Science.gov (United States)

    Koutsoukas, Alexios; Paricharak, Shardul; Galloway, Warren R J D; Spring, David R; Ijzerman, Adriaan P; Glen, Robert C; Marcus, David; Bender, Andreas

    2014-01-27

    Chemical diversity is a widely applied approach to select structurally diverse subsets of molecules, often with the objective of maximizing the number of hits in biological screening. While many methods exist in the area, few systematic comparisons using current descriptors in particular with the objective of assessing diversity in bioactivity space have been published, and this shortage is what the current study is aiming to address. In this work, 13 widely used molecular descriptors were compared, including fingerprint-based descriptors (ECFP4, FCFP4, MACCS keys), pharmacophore-based descriptors (TAT, TAD, TGT, TGD, GpiDAPH3), shape-based descriptors (rapid overlay of chemical structures (ROCS) and principal moments of inertia (PMI)), a connectivity-matrix-based descriptor (BCUT), physicochemical-property-based descriptors (prop2D), and a more recently introduced molecular descriptor type (namely, "Bayes Affinity Fingerprints"). We assessed both the similar behavior of the descriptors in assessing the diversity of chemical libraries, and their ability to select compounds from libraries that are diverse in bioactivity space, which is a property of much practical relevance in screening library design. This is particularly evident, given that many future targets to be screened are not known in advance, but that the library should still maximize the likelihood of containing bioactive matter also for future screening campaigns. Overall, our results showed that descriptors based on atom topology (i.e., fingerprint-based descriptors and pharmacophore-based descriptors) correlate well in rank-ordering compounds, both within and between descriptor types. On the other hand, shape-based descriptors such as ROCS and PMI showed weak correlation with the other descriptors utilized in this study, demonstrating significantly different behavior. We then applied eight of the molecular descriptors compared in this study to sample a diverse subset of sample compounds (4%) from an

  9. Optimizing CT radiation dose based on patient size and image quality: the size-specific dose estimate method

    Energy Technology Data Exchange (ETDEWEB)

    Larson, David B. [Stanford University School of Medicine, Department of Radiology, Stanford, CA (United States)

    2014-10-15

    The principle of ALARA (dose as low as reasonably achievable) calls for dose optimization rather than dose reduction, per se. Optimization of CT radiation dose is accomplished by producing images of acceptable diagnostic image quality using the lowest dose method available. Because it is image quality that constrains the dose, CT dose optimization is primarily a problem of image quality rather than radiation dose. Therefore, the primary focus in CT radiation dose optimization should be on image quality. However, no reliable direct measure of image quality has been developed for routine clinical practice. Until such measures become available, size-specific dose estimates (SSDE) can be used as a reasonable image-quality estimate. The SSDE method of radiation dose optimization for CT abdomen and pelvis consists of plotting SSDE for a sample of examinations as a function of patient size, establishing an SSDE threshold curve based on radiologists' assessment of image quality, and modifying protocols to consistently produce doses that are slightly above the threshold SSDE curve. Challenges in operationalizing CT radiation dose optimization include data gathering and monitoring, managing the complexities of the numerous protocols, scanners and operators, and understanding the relationship of the automated tube current modulation (ATCM) parameters to image quality. Because CT manufacturers currently maintain their ATCM algorithms as secret for proprietary reasons, prospective modeling of SSDE for patient populations is not possible without reverse engineering the ATCM algorithm and, hence, optimization by this method requires a trial-and-error approach. (orig.)
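
    SSDE itself is a simple rescaling of the displayed CTDIvol by a size-dependent conversion factor. A sketch follows, using the exponential fit coefficients quoted from AAPM Report TG-204 (they should be verified against the report before any clinical use).

```python
import math

def ssde(ctdi_vol_32cm, ap_cm, lat_cm):
    """Size-specific dose estimate from CTDIvol (32-cm phantom) using the
    AAPM TG-204 exponential fit; coefficients as quoted in the report."""
    eff_diam = math.sqrt(ap_cm * lat_cm)              # effective diameter, cm
    f = 3.704369 * math.exp(-0.03671937 * eff_diam)   # conversion factor
    return f * ctdi_vol_32cm

# A 10 mGy CTDIvol abdominal exam for a 20 cm x 30 cm patient cross-section:
print(f"SSDE = {ssde(10.0, 20.0, 30.0):.1f} mGy")
```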

  10. Electron beam treatment planning: A review of dose computation methods

    International Nuclear Information System (INIS)

    Mohan, R.; Riley, R.; Laughlin, J.S.

    1983-01-01

    Various methods of dose computation are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad-field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed
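
    The essence of the pencil beam technique is a superposition of narrow beams with a depth-dependent lateral spread. The sketch below convolves an incident field with Gaussian kernels; the sigma values are illustrative, not derived from Fermi-Eyges theory for a specific beam.

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 1201)                 # cm, lateral axis
field = (np.abs(x) <= 3.0).astype(float)         # 6 cm wide incident field

def profile(sigma_cm):
    """Broad-field profile as a convolution with a Gaussian pencil kernel."""
    kernel = np.exp(-x**2 / (2.0 * sigma_cm**2))
    return np.convolve(field, kernel / kernel.sum(), mode="same")

for depth_cm, sigma in [(1.0, 0.3), (3.0, 0.8), (5.0, 1.4)]:
    p = profile(sigma)
    xr, pr = x[x >= 0], p[x >= 0]                # descending right edge
    penumbra = xr[np.argmax(pr < 0.2)] - xr[np.argmax(pr < 0.8)]
    print(f"depth {depth_cm} cm: 80%-20% penumbra = {penumbra:.2f} cm")
```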

  11. Radiological environmental dose assessment methods and compliance dose results for 2015 operations at the Savannah River Site

    International Nuclear Information System (INIS)

    Jannik, G. T.; Dixon, K. L.

    2016-01-01

    This report presents the environmental dose assessment methods and the estimated potential doses to the offsite public from 2015 Savannah River Site (SRS) atmospheric and liquid radioactive releases. Also documented are potential doses from special-case exposure scenarios - such as the consumption of deer meat, fish, and goat milk.

  12. Radiological environmental dose assessment methods and compliance dose results for 2015 operations at the Savannah River Site

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, G. T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Dixon, K. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-09-01

    This report presents the environmental dose assessment methods and the estimated potential doses to the offsite public from 2015 Savannah River Site (SRS) atmospheric and liquid radioactive releases. Also documented are potential doses from special-case exposure scenarios - such as the consumption of deer meat, fish, and goat milk.

  13. Optimized dose distribution of a high dose rate vaginal cylinder

    International Nuclear Information System (INIS)

    Li Zuofeng; Liu, Chihray; Palta, Jatinder R.

    1998-01-01

    Purpose: To present a comparison of optimized dose distributions for a set of high-dose-rate (HDR) vaginal cylinders calculated by a commercial treatment-planning system with benchmark calculations using Monte-Carlo-calculated dosimetry data. Methods and Materials: Optimized dose distributions using both an isotropic and an anisotropic dose calculation model were obtained for a set of HDR vaginal cylinders. Mathematical optimization techniques available in the computer treatment-planning system were used to calculate dwell times and positions. These dose distributions were compared with benchmark calculations based on the TG-43 formalism and Monte-Carlo-calculated data. The same dwell times and positions were used for a quantitative comparison of the doses calculated with the three dose models. Results: The isotropic dose calculation model can result in discrepancies as high as 50%. The anisotropic dose calculation model compared better with the benchmark calculations. The differences were more significant at the apex of the vaginal cylinder, which is typically used as the prescription point. Conclusion: Dose calculation models available in a computer treatment-planning system must be evaluated carefully to ensure their correct application. It should also be noted that when the optimized dose distribution at a distance from the cylinder surface is calculated using an accurate dose calculation model, the vaginal mucosa dose becomes significantly higher and therefore should be carefully monitored
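
    For reference, the TG-43 point-source dose-rate estimate underlying such benchmark calculations has a simple form; the dose-rate constant and radial dose function values below are placeholders, not published consensus data for a specific HDR source.

```python
import numpy as np

S_k = 40700.0          # air-kerma strength, U, roughly a 10 Ci Ir-192 source
Lambda = 1.11          # dose-rate constant, cGy/(h*U) -- illustrative value
r_grid = np.array([0.5, 1.0, 2.0, 3.0, 5.0])        # cm
g_grid = np.array([1.00, 1.00, 0.99, 0.97, 0.91])   # radial dose function

def dose_rate(r_cm):
    """TG-43 point-source approximation (anisotropy factor omitted)."""
    g = np.interp(r_cm, r_grid, g_grid)
    return S_k * Lambda * (1.0 / r_cm) ** 2 * g      # cGy/h

for r in (1.0, 2.0, 3.0):
    print(f"r = {r} cm: {dose_rate(r):.0f} cGy/h")
```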

  14. [Evaluation of methods to calculate dialysis dose in daily hemodialysis].

    Science.gov (United States)

    Maduell, F; Gutiérrez, E; Navarro, V; Torregrosa, E; Martínez, A; Rius, A

    2003-01-01

    Daily dialysis has shown excellent clinical results because a higher frequency of dialysis is more physiological. Different methods have been described to calculate the dialysis dose taking the change in frequency into consideration. The aim of this study was to calculate all dialysis dose possibilities and evaluate the best and most practical options. Eight patients, 6 males and 2 females, on standard 4- to 5-hour thrice-weekly on-line hemodiafiltration (S-OL-HDF) were switched to daily on-line hemodiafiltration (D-OL-HDF), 2 to 2.5 hours six times per week. Dialysis parameters were identical during both periods; only the frequency and duration of each session were changed. Time average concentration (TAC), time average deviation (TAD), normalized protein catabolic rate (nPCR), Kt/V, equilibrated Kt/V (eKt/V), equivalent renal urea clearance (EKR), standard Kt/V (stdKt/V), urea reduction ratio (URR), hemodialysis product and time off dialysis were measured. Daily on-line hemodiafiltration was well accepted and tolerated. Patients maintained the same TAC, although TAD decreased from 9.7 ± 2 mg/dl at baseline to 6.2 ± 2 mg/dl after six months, and time off dialysis was reduced by half. Dialysis frequency is an important urea kinetic parameter that has to be taken into consideration. It is necessary to use EKR, stdKt/V or weekly URR to calculate the dialysis dose for an adequate comparison between dialysis schedules of different frequency.
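
    Two of the quoted indices are easy to compute. The sketch below evaluates the single-pool Kt/V via the second-generation Daugirdas formula and an equilibrated Kt/V via the Daugirdas rate equation (arterial access form), for a short daily session versus a long thrice-weekly one; the patient values are invented.

```python
import math

def sp_ktv(pre_bun, post_bun, t_h, uf_l, weight_kg):
    """Single-pool Kt/V, second-generation Daugirdas formula."""
    r = post_bun / pre_bun
    return -math.log(r - 0.008 * t_h) + (4 - 3.5 * r) * uf_l / weight_kg

def e_ktv(spktv, t_h):
    """Equilibrated Kt/V via the Daugirdas rate equation (arterial access)."""
    return spktv - 0.6 * (spktv / t_h) + 0.03

# A 2.25 h daily session vs. a 4.5 h thrice-weekly session, same patient:
for t in (2.25, 4.5):
    sp = sp_ktv(pre_bun=70.0, post_bun=30.0, t_h=t, uf_l=2.0, weight_kg=70.0)
    print(f"t = {t} h: spKt/V = {sp:.2f}, eKt/V = {e_ktv(sp, t):.2f}")
```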

  15. Dosing method of physical activity in aerobics classes for students

    Directory of Open Access Journals (Sweden)

    Yu.I. Beliak

    2014-10-01

    Full Text Available Purpose: to substantiate a method for dosing physical activity in aerobics classes for students, based on an evaluation of the metabolic cost of the exercises used in them. Material: the experiment involved assessing the heart rate response of students to complexes of classical and step aerobics (n = 47, age 20-23 years). The complexes used various factors regulating intensity: combinations of basic steps, involvement of arm movements, holding 1-kg dumbbells, an increased tempo of the musical accompaniment, and varying heights of the step platform. Results: on the basis of the relationship between heart rate and oxygen consumption, the energy cost of each means of controlling load intensity was determined. This indicator was then used to justify the intensity, duration and frequency of aerobics classes corresponding to the level of physical condition and the motor activity deficit of students. Conclusions: the computational component of this dosing method makes it convenient for use in automated computer programs. It can also easily be modified to dose the load in other types of recreational fitness.

  16. A new method for dosing uranium in biological media

    International Nuclear Information System (INIS)

    Henry, Ph.; Kobisch, Ch.

    1964-01-01

    This report describes a new method for dosing uranium in biological media based on the measurement of alpha activity. After treatment of the sample with a mineral acid, the uranium is reduced to valency four by trivalent titanium and is precipitated as phosphate in acid solution. The uranium is then separated from the titanium by precipitation as UF4 with lanthanum as a carrier. A slight modification, unnecessary in the case of routine analyses, makes it possible to eliminate other possible alpha emitters (thorium and transuranic elements). (authors) [fr

  17. Analysis of Cumulative Dose to Implanted Pacemaker According to Various IMRT Delivery Methods: Optimal Dose Delivery Versus Dose Reduction Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong Woo; Hong, Se Mie [Dept. of Radiation Oncology, Konkuk University Medical Center, Seoul (Korea, Republic of)

    2011-11-15

    Cancer patients with an implanted cardiac pacemaker occasionally require radiotherapy. The pacemaker may be damaged or malfunction during radiotherapy due to ionizing radiation or electromagnetic interference. Although radiotherapy should ideally be planned to keep the dose to the pacemaker as low as possible to prevent malfunction, current radiation treatment planning (RTP) systems do not accurately calculate the dose deposited near field borders or in areas beyond the irradiated fields. For beam delivery techniques using multiple intensity-modulated fields, the dosimetric effect of scattered radiation in high-energy photon beams needs to be analyzed in detail on the basis of measurement data. The aim of this study is to evaluate discrepancies between pacemaker doses calculated by an RTP system and measured doses. We also designed a dose reduction strategy limiting the pacemaker dose to 2 Gy for radiation treatment of patients with an implanted cardiac pacemaker. The total accumulated dose of 145 cGy, based on in-vivo dosimetry, satisfied the recommendation criteria for preventing pacemaker malfunction in the SS technique. However, a 2 mm lead shielder reduced the scattered doses by up to 60% and 40% in the patient and the phantom, respectively. The SS technique with lead shielding could reduce the accumulated scattered dose to less than 100 cGy. Calculated and measured doses were not greatly affected by the beam delivery technique. In-vivo and measured doses at the pacemaker position showed critical dose discrepancies, reaching up to 4 times the doses planned in the RTP system. The current SS technique could deliver lower scattered doses than the recommendation criteria, and the use of a 2 mm lead shielder contributed a further 60% reduction of scattered doses. The tertiary lead shielder can be useful to prevent malfunction or electrical damage of implanted pacemakers during radiotherapy. More accurate estimation of the scattered doses to the patient or medical devices in RTP is required to design a proper dose reduction strategy.

  18. Analysis of Cumulative Dose to Implanted Pacemaker According to Various IMRT Delivery Methods: Optimal Dose Delivery Versus Dose Reduction Strategy

    International Nuclear Information System (INIS)

    Lee, Jeong Woo; Hong, Se Mie

    2011-01-01

    Cancer patients with an implanted cardiac pacemaker occasionally require radiotherapy. The pacemaker may be damaged or malfunction during radiotherapy due to ionizing radiation or electromagnetic interference. Although radiotherapy should ideally be planned to keep the dose to the pacemaker as low as possible to prevent malfunction, current radiation treatment planning (RTP) systems do not accurately calculate the dose deposited near field borders or in areas beyond the irradiated fields. For beam delivery techniques using multiple intensity-modulated fields, the dosimetric effect of scattered radiation in high-energy photon beams needs to be analyzed in detail on the basis of measurement data. The aim of this study is to evaluate discrepancies between pacemaker doses calculated by an RTP system and measured doses. We also designed a dose reduction strategy limiting the pacemaker dose to 2 Gy for radiation treatment of patients with an implanted cardiac pacemaker. The total accumulated dose of 145 cGy, based on in-vivo dosimetry, satisfied the recommendation criteria for preventing pacemaker malfunction in the SS technique. However, a 2 mm lead shielder reduced the scattered doses by up to 60% and 40% in the patient and the phantom, respectively. The SS technique with lead shielding could reduce the accumulated scattered dose to less than 100 cGy. Calculated and measured doses were not greatly affected by the beam delivery technique. In-vivo and measured doses at the pacemaker position showed critical dose discrepancies, reaching up to 4 times the doses planned in the RTP system. The current SS technique could deliver lower scattered doses than the recommendation criteria, and the use of a 2 mm lead shielder contributed a further 60% reduction of scattered doses. The tertiary lead shielder can be useful to prevent malfunction or electrical damage of implanted pacemakers during radiotherapy. More accurate estimation of the scattered doses to the patient or medical devices in RTP is required to design a proper dose reduction strategy.

  19. Dose rate reduction method for NMCA applied BWR plants

    International Nuclear Information System (INIS)

    Nagase, Makoto; Aizawa, Motohiro; Ito, Tsuyoshi; Hosokawa, Hideyuki; Varela, Juan; Caine, Thomas

    2012-09-01

    BRAC (BWR Radiation Assessment and Control) dose rate is used as an indicator of the incorporation of activated corrosion products into BWR recirculation piping, which is known to be a significant contributor to the dose received by workers during refueling outages. In order to reduce the radiation exposure of workers during an outage, it is desirable to keep BRAC dose rates as low as possible. After HWC was adopted to reduce IGSCC, a BRAC dose rate increase was observed in many plants. As a countermeasure to these rapid dose rate increases under HWC conditions, Zn injection was widely adopted in the United States and Europe, resulting in a reduction of BRAC dose rates. However, BRAC dose rates in several plants remain high, prompting the industry to continue to investigate methods to achieve further reductions. In recent years a large portion of the BWR fleet has adopted NMCA (NobleChem(TM)) to enhance the hydrogen injection effect and suppress SCC. After NMCA, especially OLNC (On-Line NobleChem(TM)), BRAC dose rates were observed to decrease. In some OLNC-applied BWR plants this reduction was observed year after year, reaching a new, lower equilibrium level. These dose rate reduction trends suggest that further dose reduction might be obtained by a combination of Pt and Zn injection. Therefore, laboratory experiments and in-plant tests were carried out to evaluate the effect of Pt and Zn on Co-60 deposition behaviour. First, laboratory experiments were conducted to study the effect of noble metal deposition on Co deposition on stainless steel surfaces. Polished type 316 stainless steel coupons were prepared, and some of them were OLNC-treated in the test loop before the Co deposition test. Water chemistry conditions simulating HWC were as follows: dissolved oxygen, hydrogen and hydrogen peroxide were below 5 ppb, 100 ppb and 0 ppb (no addition), respectively. Zn was injected to target a concentration of 5 ppb. The test was conducted for up to 1500 hours at 553 K. Test

  20. Intercomparison of the finite difference and nodal discrete ordinates and surface flux transport methods for a LWR pool-reactor benchmark problem in X-Y geometry

    International Nuclear Information System (INIS)

    O'Dell, R.D.; Stepanek, J.; Wagner, M.R.

    1983-01-01

    The aim of the present work is to compare and discuss three of the most advanced two-dimensional transport methods: the finite difference and nodal discrete ordinates methods and the surface flux method, incorporated into the transport codes TWODANT, TWOTRAN-NODAL, MULTIMEDIUM and SURCU. For the intercomparison, the eigenvalue and the neutron flux distribution are calculated using these codes for the LWR pool reactor benchmark problem. Additionally, the results are compared with some results obtained by the French collision probability transport codes MARSYAS and TRIDENT. Because the transport solution of this benchmark problem is close to its diffusion solution, some results obtained by the finite element diffusion code FINELM and the finite difference diffusion code DIFF-2D are included

  1. Benchmark calculations of power distribution within fuel assemblies. Phase 2: comparison of data reduction and power reconstruction methods in production codes

    International Nuclear Information System (INIS)

    2000-01-01

    Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computation benchmark on power distribution within fuel assemblies to compare different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel was carried out at an international level. It addressed first the numerical schemes for pin power reconstruction, then investigated the global performance including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for higher plutonium isotopes and minor actinides. (author)

  2. A study of different dose calculation methods and the impact on the dose evaluation protocol in lung stereotactic radiation therapy

    International Nuclear Information System (INIS)

    Takada, Takahiro; Furuya, Tomohisa; Ozawa, Shuichi; Ito, Kana; Kurokawa, Chie; Karasawa, Kumiko; Miura, Kohei

    2008-01-01

    AAA (analytical anisotropic algorithm) dose calculation, which shows better performance for heterogeneity correction, was tested for lung stereotactic body radiation therapy (SBRT) in comparison with the conventional pencil beam convolution (PBC) method to evaluate its impact on tumor dose parameters. Eleven lung SBRT patients who were treated with 4 MV photon beams in our department between April 2003 and February 2007 were reviewed. The clinical target volume (CTV) was delineated including the spicula region on planning CT images. The planning target volume (PTV) was defined by adding the internal target volume (ITV) and a set-up margin (SM) of 5 mm to the CTV, and then a multileaf collimator (MLC) penumbra margin of another 5 mm was also added. Six non-coplanar beam ports were employed, and a total prescribed dose of 48 Gy in four fractions was defined at the isocenter point. The entire treatment for an individual patient was completed within 8 days. Under the same prescribed dose, the calculated dose distributions, dose volume histograms (DVHs), and tumor dose parameters were compared between the two dose calculation methods. In addition, the fractionated prescription dose was repeatedly scaled until the monitor units (MUs) calculated by AAA reached a level nearly identical to the MUs calculated by PBC. AAA resulted in a significantly lower D95 (the dose covering 95% of the PTV volume) and minimum dose in the PTV compared to PBC. After rescaling of the MU for each beam in the AAA plan, no revision of the prescribed dose at the isocenter was required. However, when the PTV volume was less than 20 cc, a 4% lower prescription resulted in nearly identical MUs between AAA and PBC. The prescribed dose in AAA should be the same as that in PBC if the dose is prescribed at the isocenter point. However, planners should compare DVHs and dose distributions between AAA and PBC for small lung tumors with a PTV volume less than approximately 20 cc. (author)

  3. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means the revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and the need for managers to know the success factors and the determinants of competitiveness, consequently determine which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark performance.

  4. Gamma ray benchmark on the spent fuel shipping cask TN 12

    International Nuclear Information System (INIS)

    Blum, P.; Cagnon, R.; Cladel, C.; Ermont, G.; Nimal, J.C.

    1983-05-01

    The purpose of this benchmark is to compare measurements and calculations of gamma-ray dose rates around a shipping cask loaded with 12 spent fuel elements of the FESSENHEIM PWR type. The benchmark provides a means to verify gamma-ray sources and gamma-ray transport calculation methods in shipping cask configurations. The comparison between measurements and calculations shows good agreement, except near the top of the fuel elements, where the discrepancy reaches a factor of 2

  5. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    International Nuclear Information System (INIS)

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H.

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and the number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero-dose value, with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput
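
    The design question can be explored directly by simulation: for a fixed number of spectra, compare the empirical standard error of the additive-dose estimate (the fitted intercept divided by the slope) between an evenly spaced design and one concentrated near zero dose with a single high-dose point, as the abstract recommends. All numbers below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
true_dose, slope, sigma = 2.0, 1.0, 0.3      # arbitrary units

def empirical_se(added_doses, n_rep=4000):
    """Empirical SE of the dose estimate for a given additive-dose design."""
    x = np.asarray(added_doses, float)
    estimates = []
    for _ in range(n_rep):
        y = slope * (x + true_dose) + rng.normal(0.0, sigma, x.size)
        b, a = np.polyfit(x, y, 1)           # linear dose response
        estimates.append(a / b)              # accrued dose = intercept / slope
    return np.std(estimates)

evenly_spaced = [0, 5, 10, 15, 20, 25]
clustered = [0, 0, 0, 0, 25, 25]             # mass near zero + one high dose
print("SE, evenly spaced:", round(empirical_se(evenly_spaced), 3))
print("SE, clustered    :", round(empirical_se(clustered), 3))
```

    With these illustrative numbers the clustered design roughly halves the standard error, in line with the abstract's conclusion.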

  6. Experimental method research on neutron equal dose-equivalent detection

    International Nuclear Information System (INIS)

    Ji Changsong

    1995-10-01

    The design principles of neutron dose-equivalent meters for neutron biological equi-effect detection are studied. Two traditional principles, the 'absorption net principle' and the 'multi-detector principle', are discussed, and on this basis a new theoretical principle for neutron biological equi-effect detection, the 'absorption stick principle', is put forward, with the aim of both increasing the neutron sensitivity of this type of meter and overcoming the shortcomings of the two traditional methods. In accordance with this new principle, a brand-new model of neutron dose-equivalent meter, BH3105, has been developed. Its neutron sensitivity reaches 10 cps/(μSv·h⁻¹), 18∼40 times higher than the 0.23∼0.56 cps/(μSv·h⁻¹) of the same kinds of meters available today at home and abroad, and the specifications of the newly developed meter reach or surpass the levels of meters of the same kind. The new theoretical principle of neutron biological equi-effect detection, the 'absorption stick principle', is thus proved by experiment to be scientific, advanced and useful. (3 refs., 3 figs., 2 tabs.)

  7. Absorbed dose determination in photon fields using the tandem method

    International Nuclear Information System (INIS)

    Marques Pachas, J.F.

    1999-01-01

    The purpose of this work is to develop an alternative method to determine the absorbed dose and the effective energy of photons with unknown spectral distributions. It employs a 'tandem' system consisting of two thermoluminescent dosemeters with different energy dependence: LiF:Mg,Ti and CaF2:Dy dosemeters, read with a Harshaw 3500 reader. The dosemeters are characterized with 90Sr-90Y, calibrated at the 60Co energy, and irradiated with seven different X-ray beam qualities, as suggested by ANSI No. 13 and ISO 4037. The response of each type of dosemeter is fitted to a function of the effective photon energy. The fit is carried out by means of the Rosenbrock minimization algorithm. The mathematical model used for this function includes five parameters and comprises a Gaussian plus a straight line. Results show that the analytical functions reproduce the experimental response data with an error of less than 5%. The ratio of the CaF2:Dy and LiF:Mg,Ti responses as a function of the radiation energy allows us to establish the effective energy of the photons and the absorbed dose, with errors of less than 10% and 20%, respectively
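
    The inversion at the heart of the tandem method (mapping a measured response ratio back to an effective energy) can be sketched with invented response models of the stated Gaussian-plus-line form; the real fitted parameters are those of the paper, not the placeholder numbers used here.

```python
import numpy as np
from scipy.optimize import brentq

# Invented Gaussian-plus-line response models for the two phosphors.
def resp_caf2(e_kev):
    return 1.0 + 12.0 * np.exp(-((e_kev - 40.0) / 15.0) ** 2) - 1e-4 * e_kev

def resp_lif(e_kev):
    return 1.0 + 0.4 * np.exp(-((e_kev - 35.0) / 20.0) ** 2) - 5e-5 * e_kev

def effective_energy(measured_ratio, lo=45.0, hi=200.0):
    """Invert the CaF2:Dy / LiF:Mg,Ti ratio on its descending branch."""
    return brentq(lambda e: resp_caf2(e) / resp_lif(e) - measured_ratio,
                  lo, hi)

print(f"E_eff ≈ {effective_energy(3.0):.0f} keV")
```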

  8. A track length estimator method for dose calculations in low-energy X-ray irradiations. Implementation, properties and performance

    Energy Technology Data Exchange (ETDEWEB)

    Baldacci, F.; Delaire, F.; Letang, J.M.; Sarrut, D.; Smekens, F.; Freud, N. [Lyon-1 Univ. - CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Centre Leon Berard (France); Mittone, A.; Coan, P. [LMU Munich (Germany). Dept. of Physics; LMU Munich (Germany). Faculty of Medicine; Bravin, A.; Ferrero, C. [European Synchrotron Radiation Facility, Grenoble (France); Gasilov, S. [LMU Munich (Germany). Dept. of Physics

    2015-05-01

    The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10^3, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.
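
    The variance advantage of a track-length estimator over an analog estimator can be demonstrated in a one-dimensional toy Monte Carlo, under the kerma-style assumption that the scored quantity is proportional to track length times the attenuation coefficient. This is a didactic sketch, not the GATE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.5                                   # attenuation coefficient, 1/cm
L, nbins, nphot = 5.0, 50, 20000
edges = np.linspace(0.0, L, nbins + 1)

analog = np.zeros(nbins)
tle = np.zeros(nbins)
for _ in range(nphot):
    s = rng.exponential(1.0 / mu)          # sampled free path
    end = min(s, L)
    # TLE: every traversed bin scores (track length in bin) * mu.
    lo = np.clip(edges[:-1], 0.0, end)
    hi = np.clip(edges[1:], 0.0, end)
    tle += (hi - lo) * mu
    # Analog: only the bin containing the collision site scores.
    if s < L:
        analog[np.searchsorted(edges, s) - 1] += 1.0

depth = 0.5 * (edges[:-1] + edges[1:])
expected = nphot * mu * np.exp(-mu * depth) * (L / nbins)
for name, tally in (("TLE", tle), ("analog", analog)):
    rel = np.abs(tally - expected).mean() / expected.mean()
    print(f"{name:6s} mean relative deviation: {rel:.3f}")
```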

  9. A BENCHMARK PROGRAM FOR EVALUATION OF METHODS FOR COMPUTING SEISMIC RESPONSE OF COUPLED BUILDING-PIPING/EQUIPMENT WITH NON-CLASSICAL DAMPING

    International Nuclear Information System (INIS)

    Xu, J.; Degrassi, G.; Chokshi, N.

    2001-01-01

    Under the auspices of the US Nuclear Regulatory Commission (NRC), Brookhaven National Laboratory (BNL) developed a comprehensive program to evaluate state-of-the-art methods and computer programs for seismic analysis of typical coupled nuclear power plant (NPP) systems with non-classical damping. In this program, four benchmark models of coupled building-piping/equipment systems with different damping characteristics were analyzed for a suite of earthquakes by program participants applying their uniquely developed methods and computer programs. This paper presents the results of their analyses and their comparison to the benchmark solutions generated by BNL using time-domain direct integration methods. The participants' results obtained with complex-mode time history methods compared well with the BNL solutions, while analyses performed with either complex-mode response spectrum methods or the classical normal-mode response spectrum method generally produced more conservative results when averaged over a suite of earthquakes. However, when coupling due to damping is significant, complex-mode response spectrum methods performed better than the classical normal-mode response spectrum method. Furthermore, as part of the program objectives, a parametric assessment is also presented in this paper, aimed at evaluating the applicability of various analysis methods to problems with different dynamic characteristics unique to coupled NPP systems. It is believed that the findings and insights gained from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems
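
    Non-classical (non-proportional) damping is what forces the state-space, complex-mode treatment evaluated in this program. A minimal sketch of extracting complex modal frequencies and damping ratios for a toy coupled two-degree-of-freedom system follows; the matrices are illustrative, not one of the benchmark models.

```python
import numpy as np

# Toy coupled system: a heavy, damped primary structure carrying a light,
# lightly damped secondary system; C is not proportional to M and K, so
# classical normal modes do not decouple the equations of motion.
M = np.diag([1000.0, 10.0])                          # kg
K = np.array([[5.0e6, -1.0e5], [-1.0e5, 1.0e5]])     # N/m
C = np.array([[2.0e4, -50.0], [-50.0, 50.0]])        # N*s/m

# First-order (state-space) form: eigenvalues come in conjugate pairs.
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
for lam in np.linalg.eigvals(A):
    if lam.imag > 0:                                 # one per conjugate pair
        wn = abs(lam)
        print(f"f = {wn / (2 * np.pi):6.2f} Hz, "
              f"modal damping ratio = {-lam.real / wn:.3f}")
```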

  10. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  11. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations for moving the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  13. Repeated dose titration versus age-based method in electroconvulsive therapy: a pilot study

    NARCIS (Netherlands)

    Aten, J.J.; Oudega, M.L.; van Exel, E.; Stek, M.L.; van Waarde, J.A.

    2015-01-01

    In electroconvulsive therapy (ECT), a dose titration method (DTM) was suggested to be more individualized and therefore more accurate than formula-based dosing methods. A repeated DTM (every sixth session and dose adjustment accordingly) was compared to an age-based method (ABM) regarding treatment

  14. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    Science.gov (United States)

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-04-04

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer improved the LET response in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors.
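
    The dose-weighted correction described above lends itself to a very small implementation. The sketch below is illustrative only: the correction-factor table is hypothetical, and the residual ranges would in practice come from the pencil beam calculation mentioned in the abstract.

        import numpy as np

        # Hypothetical LET correction factors versus residual range (cm); real
        # values would come from the calibration described in the abstract.
        RESIDUAL_RANGE_CM = np.array([0.5, 2.0, 5.0, 10.0, 20.0])
        CORRECTION_FACTOR = np.array([1.35, 1.20, 1.10, 1.05, 1.00])

        def corrected_dose(mosfet_reading_cgy, residual_range_cm):
            """Correct a raw MOSFET reading by the residual-range-dependent factor."""
            factor = np.interp(residual_range_cm, RESIDUAL_RANGE_CM, CORRECTION_FACTOR)
            return mosfet_reading_cgy * factor

        # A reading of 95 cGy near the end of range (2 cm residual):
        print(corrected_dose(95.0, 2.0))  # -> 114.0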

  15. Risk and dose assessment methods in gamma knife QA

    International Nuclear Information System (INIS)

    Banks, W.W.; Jones, E.D.; Rathbun, P.

    1992-10-01

    Traditional methods used in assessing risk in nuclear power plants may be inappropriate for assessing medical radiation risks. The typical philosophy used in assessing nuclear reactor risks is machine dominated, with only secondary attention paid to the human component, and only after critical machine failure events have been identified. In assessing the risk of a misadministered radiation dose to patients, the primary source of failures seems to stem overwhelmingly from the actions of people, and only secondarily from machine failure modes. In essence, certain medical misadministrations are dominated by human events, not machine failures. Radiological medical devices such as the Leksell Gamma Knife are very simple in design, have few moving parts, and are relatively free from the risks of wear when compared with a nuclear power plant. Since there are major technical differences between a gamma knife and a nuclear power plant, one must select a risk assessment method which is sensitive to these system differences and tailored to the unique medical aspects of the phenomena under study. These differences also generate major shifts in the philosophy and assumptions which drive the risk assessment method (machine-centered vs person-centered). We were prompted by these basic differences to develop a person-centered approach to risk assessment which would reflect these basic philosophical and technological differences, have the necessary resolution in its metrics, and be highly reliable (repeatable). The risk approach chosen by the Livermore investigative team has been called the "Relative Risk Profile Method" and has been described in detail by Banks and Paramore (1983).

  16. Evaluation of mathematical methods for predicting optimum dose of gamma radiation in sugarcane (Saccharum sp.)

    International Nuclear Information System (INIS)

    Wu, K.K.; Siddiqui, S.H.; Heinz, D.J.; Ladd, S.L.

    1978-01-01

    Two mathematical methods - the reversed logarithmic method and the regression method - were used to compare the predicted and the observed optimum gamma radiation dose (OD50) in vegetative propagules of sugarcane. The reversed logarithmic method, usually used in sexually propagated crops, showed the largest difference between the predicted and observed optimum dose. The regression method resulted in a better prediction of the observed values and is suggested as a better method for the prediction of optimum dose for vegetatively propagated crops. (author)
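
    As a rough illustration of the regression approach (not the authors' exact formulation), one can fit propagule survival against dose and solve the fitted line for the dose giving 50% survival. The dose levels and survival fractions below are invented for the example.

        import numpy as np

        # Invented survival data for irradiated vegetative propagules.
        dose_gy = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
        survival = np.array([0.98, 0.80, 0.55, 0.35, 0.15])

        # Simple linear regression of survival on dose.
        slope, intercept = np.polyfit(dose_gy, survival, 1)

        # Predicted optimum dose OD50: the dose at which survival falls to 50%.
        od50 = (0.50 - intercept) / slope
        print(f"predicted OD50 = {od50:.1f} Gy")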

  17. Generation of uniformly distributed dose points for anatomy-based three-dimensional dose optimization methods in brachytherapy.

    Science.gov (United States)

    Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N

    2000-05-01

    We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or variants of these parameters, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV, or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D, which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than do D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory, and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.
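
    The instability of D(min) and D(max) relative to D(mean) and the variance is easy to reproduce with a toy model. The sketch below samples uniformly distributed points in a spherical stand-in for the PTV and evaluates an inverse-square dose from a single point source; both the geometry and the dose model are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_sphere(n, radius=2.0):
            """Uniform random points inside a sphere (toy stand-in for a PTV)."""
            pts = rng.normal(size=(n, 3))
            pts /= np.linalg.norm(pts, axis=1, keepdims=True)
            return pts * (radius * rng.random(n) ** (1.0 / 3.0))[:, None]

        def dose(points, source=(0.0, 0.0, 3.0)):
            """Toy inverse-square dose from a single point source."""
            return 1.0 / np.sum((points - np.asarray(source)) ** 2, axis=1)

        for n in (100, 1000, 10000):
            d = dose(sample_sphere(n))
            print(n, round(d.mean(), 4), round(d.var(), 5),
                  round(d.min(), 4), round(d.max(), 4))
        # D(mean) and the variance stabilise quickly with n, while D(min) and
        # D(max) keep drifting -- the instability the abstract warns about.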

  18. Method for simulating dose reduction in digital mammography using the Anscombe transformation

    OpenAIRE

    Borges, Lucas R.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2016-01-01

    Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the d...
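
    The general recipe, as far as it can be inferred from the truncated abstract, is to scale the image, variance-stabilise it with the Anscombe transformation (under which Poisson noise becomes approximately unit-variance Gaussian), inject the extra noise a lower dose would produce, and invert the transform. The sketch below is a simplified version of that idea; it treats the input as nearly noise-free and ignores the detector-specific effects the authors model.

        import numpy as np

        def simulate_dose_reduction(image, dose_fraction, rng=None):
            """Toy low-dose simulation for a Poisson-noise image.

            Scales the expected signal by dose_fraction, then injects noise in
            the Anscombe domain, where Poisson noise is approximately Gaussian
            with unit variance. The input is treated as (nearly) noise-free,
            which a faithful implementation would not assume.
            """
            rng = rng or np.random.default_rng(0)
            scaled = image * dose_fraction               # fewer quanta at lower dose
            ansc = 2.0 * np.sqrt(scaled + 3.0 / 8.0)     # Anscombe transform
            noisy = ansc + rng.normal(size=image.shape)  # unit-variance noise
            return (noisy / 2.0) ** 2 - 3.0 / 8.0        # (approximate) inverse

        half_dose = simulate_dose_reduction(np.full((4, 4), 100.0), 0.5)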

  19. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  20. Facilitating organisational development using a group-based formative assessment and benchmarking method: design and implementation of the International Family Practice Maturity Matrix.

    Science.gov (United States)

    Elwyn, Glyn; Bekkers, Marie-Jet; Tapp, Laura; Edwards, Adrian; Newcombe, Robert; Eriksson, Tina; Braspenning, Jozé; Kuch, Christine; Adzic, Zlata Ozvacic; Ayankogbe, Olayinka; Cvetko, Tatjana; In 't Veld, Kees; Karotsis, Antonis; Kersnik, Janko; Lefebvre, Luc; Mecini, Ilir; Petricek, Goranka; Pisco, Luis; Thesen, Janecke; Turón, José María; van Rossen, Edward; Grol, Richard

    2010-12-01

    Well-organised practices deliver higher-quality care. Yet there has been very little effort so far to help primary care organisations achieve higher levels of team performance and to help them identify and prioritise areas where quality improvement efforts should be concentrated. No attempt at all has been made to achieve a method which would be capable of providing comparisons--and the stimulus for further improvement--at an international level. The development of the International Family Practice Maturity Matrix took place in three phases: (1) selection and refinement of organisational dimensions; (2) development of incremental scales based on a recognised theoretical framework; and (3) testing the feasibility of the approach on an international basis, including generation of an automated web-based benchmarking system. This work has demonstrated the feasibility of developing an organisational assessment tool for primary care organisations that is sufficiently generic to cross international borders and is applicable across a diverse range of health settings, from state-organised systems to insurer-based health economies. It proved possible to introduce this assessment method in 11 countries in Europe and one in Africa, and to generate comparison benchmarks based on the data collected. The evaluation of the assessment process was uniformly positive, with the view that the approach efficiently enables the identification of priorities for organisational development and quality improvement at the same time as motivating change by virtue of the group dynamics. We are not aware of any other organisational assessment method for primary care which has been 'born international,' and that has involved attention to theory, dimension selection and item refinement. The principal aims were to achieve an organisational assessment which gains added value by using interaction, engagement and comparative benchmarks: aims which have been achieved. The next step is to achieve wider

  1. Estimation of benchmark dose as the threshold levels of urinary cadmium, based on excretion of total protein, β2-microglobulin, and N-acetyl-β-D-glucosaminidase in cadmium nonpolluted regions in Japan

    International Nuclear Information System (INIS)

    Kobayashi, Etsuko; Suwazono, Yasushi; Uetani, Mirei; Inaba, Takeya; Oishi, Mitsuhiro; Kido, Teruhiko; Nishijo, Muneko; Nakagawa, Hideaki; Nogawa, Koji

    2006-01-01

    Previously, we investigated the association between urinary cadmium (Cd) concentration and indicators of renal dysfunction, including total protein, β2-microglobulin (β2-MG), and N-acetyl-β-D-glucosaminidase (NAG). In 2778 inhabitants ≥50 years of age (1114 men, 1664 women) in three different Cd nonpolluted areas in Japan, we showed that a dose-response relationship existed between renal effects and Cd exposure in the general environment without any known Cd pollution. However, we could not estimate the threshold levels of urinary Cd at that time. In the present study, we estimated the threshold levels of urinary Cd as the benchmark dose lower bound (BMDL) using the benchmark dose (BMD) approach. Urinary Cd excretion was divided into 10 categories, and an abnormality rate was calculated for each. Cut-off values for urinary substances were defined as the 84% and 95% upper limit values of the nonsmoking target population. We then calculated the BMD and BMDL using a log-logistic model. The BMD and BMDL values could be calculated for all urinary substances. The BMDL for the 84% cut-off value of β2-MG, setting an abnormal value at 5%, was 2.4 μg/g creatinine (cr) in men and 3.3 μg/g cr in women. In conclusion, the present study demonstrated that the threshold level of urinary Cd could be estimated in people living in the general environment without any known Cd pollution in Japan, and the value was inferred to be almost the same as that in Belgium, Sweden, and China.
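
    The BMD step itself is straightforward to sketch. The example below fits a log-logistic model to invented grouped data (median urinary Cd per category versus abnormality rate) and inverts the fit at a 5% benchmark response; computing the BMDL would additionally require a lower confidence bound, e.g. by profile likelihood or bootstrap, which is omitted here.

        import numpy as np
        from scipy.optimize import curve_fit

        def log_logistic(d, bg, alpha, beta):
            """Abnormality rate: bg + (1 - bg) / (1 + exp(-(alpha + beta*ln d)))."""
            return bg + (1.0 - bg) / (1.0 + np.exp(-(alpha + beta * np.log(d))))

        # Invented grouped data: median urinary Cd (ug/g creatinine) per category
        # versus observed abnormality rate of a urinary marker.
        cd = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0])
        rate = np.array([0.03, 0.04, 0.06, 0.09, 0.16, 0.28])

        (bg, alpha, beta), _ = curve_fit(log_logistic, cd, rate, p0=[0.02, -4.0, 1.5])

        # Benchmark dose for a 5% benchmark response (extra risk over background):
        bmr = 0.05
        bmd = np.exp((np.log(bmr / (1.0 - bmr)) - alpha) / beta)
        print(f"BMD(5%) = {bmd:.2f} ug/g creatinine")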

  2. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    Two articles give an overview of benchmarking activities in Dutch industry and the energy sector. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants on energy efficiency between the Dutch government and industrial sectors are contributing to a growing number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies.

  3. Method for calculating individual equivalent doses and cumulative dose of population in the vicinity of nuclear power plant site

    International Nuclear Information System (INIS)

    Namestek, L.; Khorvat, D; Shvets, J.; Kunz, Eh.

    1976-01-01

    A method is described for calculating the doses from external and internal irradiation of persons in the vicinity of a nuclear power plant under conditions of normal operation and in accident situations. The main difference between this method and those used previously is the use of a new anthropomorphic representation of the human body together with all its organs. The anthropomorphic model of the human body and its organs is defined as a set of simple solids, with the coordinates, sizes, masses, densities and composition of the solids corresponding to the real organs. The use of the Monte Carlo method is the second difference. The results of calculations with the suggested model can be used to determine: the critical group of inhabitants under conditions of normal plant operation; the groups of inhabitants most exposed in the case of a possible accident; the critical sector with the maximum collective dose in the case of an accident; the critical radioisotope making the greatest contribution to the individual equivalent dose; the critical exposure pathways making the maximum contribution to individual equivalent doses; and cumulative collective doses for the whole region, or for a chosen part of the region, permitting an estimate of the population dose. Further development of the method involves refinement of separate units of the calculation program, critical application and selection of input data of a physical, physiological and ecological character, and improvement of the calculation program for separate concrete cases [ru]

  4. A phantom based method for deriving typical patient doses from measurements of dose-area product on populations of patients

    International Nuclear Information System (INIS)

    Chapple, C.-L.; Broadhead, D.A.

    1995-01-01

    One of the chief sources of uncertainty in the comparison of patient dosimetry data is the influence of patient size on dose. Dose has been shown to relate closely to the equivalent diameter of the patient. This concept has been used to derive a prospective, phantom based method for determining size correction factors for measurements of dose-area product. The derivation of the size correction factor has been demonstrated mathematically, and the appropriate factor determined for a number of different X-ray sets. The use of phantom measurements enables the effect of patient size to be isolated from other factors influencing patient dose. The derived factors agree well with those determined retrospectively from patient dose survey data. Size correction factors have been applied to the results of a large scale patient dose survey, and this approach has been compared with the method of selecting patients according to their weight. For large samples of data, mean dose-area product values are independent of the analysis method used. The chief advantage of using size correction factors is that it allows all patient data to be included in a survey, whereas patient selection has been shown to exclude approximately half of all patients. (author)
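
    Applying such a size correction is a one-line operation once the factors are known. The sketch below assumes, purely for illustration, that ln(dose-area product) rises linearly with the patient's equivalent diameter, with a slope taken from phantom measurements; the slope and reference diameter used here are invented.

        import math

        # Hypothetical phantom-derived slope: change in ln(DAP) per cm of
        # equivalent diameter, for one particular X-ray set.
        SLOPE_PER_CM = 0.18
        STANDARD_DIAMETER_CM = 22.0

        def size_corrected_dap(dap_gycm2, equivalent_diameter_cm):
            """Normalise a dose-area product reading to a standard-size patient."""
            shift = equivalent_diameter_cm - STANDARD_DIAMETER_CM
            return dap_gycm2 * math.exp(-SLOPE_PER_CM * shift)

        # A larger patient (26 cm) with DAP 12 Gy.cm2, expressed at standard size:
        print(round(size_corrected_dap(12.0, 26.0), 2))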

  5. Leveraging long read sequencing from a single individual to provide a comprehensive resource for benchmarking variant calling methods.

    Science.gov (United States)

    Mu, John C; Tootoonchi Afshar, Pegah; Mohiyuddin, Marghoob; Chen, Xi; Li, Jian; Bani Asadi, Narges; Gerstein, Mark B; Wong, Wing H; Lam, Hugo Y K

    2015-09-28

    A high-confidence, comprehensive human variant set is critical in assessing accuracy of sequencing algorithms, which are crucial in precision medicine based on high-throughput sequencing. Although recent works have attempted to provide such a resource, they still do not encompass all major types of variants including structural variants (SVs). Thus, we leveraged the massive high-quality Sanger sequences from the HuRef genome to construct by far the most comprehensive gold set of a single individual, which was cross validated with deep Illumina sequencing, population datasets, and well-established algorithms. It was a necessary effort to completely reanalyze the HuRef genome as its previously published variants were mostly reported five years ago, suffering from compatibility, organization, and accuracy issues that prevent their direct use in benchmarking. Our extensive analysis and validation resulted in a gold set with high specificity and sensitivity. In contrast to the current gold sets of the NA12878 or HS1011 genomes, our gold set is the first that includes small variants, deletion SVs and insertion SVs up to a hundred thousand base-pairs. We demonstrate the utility of our HuRef gold set to benchmark several published SV detection tools.

  6. Statistics and the additive dose method in TL dating

    International Nuclear Information System (INIS)

    Scott, E.M.

    1988-01-01

    Estimation of the palaeo-dose and its associated error in thermoluminescence (TL) dating requires assumptions to be made concerning the error structures of both the TL signal and dose. In this paper, we consider the sensitivity of palaeo-dose estimation to different error structures and describe techniques for estimation of the resultant palaeo-dose errors. We also indicate how the validity of the assumptions may be verified. We have taken the approach that procedures for the analysis of a glow curve at a single temperature must first be proved prior to their inclusion in a full analysis of the glow curves over a series of temperatures. Thus, the paper deals mainly with the analysis of the univariate structure of glow curves, i.e. estimation of the palaeo-dose and error at each temperature, but in the final section of the paper, we discuss briefly the multivariate structure of the results including automatic detection of plateau regions. (author)
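
    For orientation, the additive dose method at a single glow-curve temperature reduces to a linear fit and an extrapolation: the TL signal is measured after several added laboratory doses, a line is fitted, and the palaeo-dose is read off as (minus) the dose-axis intercept. The sketch below uses invented data and ignores the error-structure subtleties that are the actual subject of the paper.

        import numpy as np

        # Invented additive-dose data at one glow-curve temperature.
        added_dose_gy = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
        tl_signal = np.array([10.2, 14.1, 17.8, 22.3, 25.9])  # arbitrary units

        slope, intercept = np.polyfit(added_dose_gy, tl_signal, 1)

        # Extrapolating the line back to zero signal gives the palaeo-dose
        # as the (negated) dose-axis intercept.
        palaeo_dose = intercept / slope
        print(f"palaeo-dose estimate = {palaeo_dose:.1f} Gy")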

  7. Manual method for dose calculation in gynecologic brachytherapy; Metodo manual para o calculo de doses em braquiterapia ginecologica

    Energy Technology Data Exchange (ETDEWEB)

    Vianello, Elizabeth A.; Almeida, Carlos E. de [Instituto Nacional do Cancer, Rio de Janeiro, RJ (Brazil); Biaggio, Maria F. de [Universidade do Estado, Rio de Janeiro, RJ (Brazil)

    1998-09-01

    This paper describes a manual method for dose calculation in brachytherapy of gynecological tumors which allows the calculation of the dose at any plane or point of clinical interest. The method uses basic principles of vector algebra and the orthogonal simulation films taken of the patient with the applicators and dummy sources in place. The results obtained with the method were compared with the values calculated with the treatment planning system model Theraplan, and the agreement was better than 5% in most cases. The critical points associated with the final accuracy of the proposed method are related to the quality of the image and the appropriate selection of the magnification factors. This method is strongly recommended for radiation oncology centers where no treatment planning systems are available and dose calculations are done manually. (author) 10 refs., 5 figs.

  8. Multicentre evaluation of a novel vaginal dose reporting method in 153 cervical cancer patients

    DEFF Research Database (Denmark)

    Westerveld, Henrike; de Leeuw, Astrid; Kirchheiner, Kathrin

    2016-01-01

    Background and purpose: Recently, a vaginal dose reporting method for combined EBRT and BT in cervical cancer patients was proposed. The aim of the current study was to evaluate vaginal doses with this method in a multicentre setting, wherein different applicators, dose rates and protocols were used. Materia...

  9. Method of predicting the mean lung dose based on a patient's anatomy and dose-volume histograms

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka, Anna, E-mail: a.zawadzka@zfm.coi.pl [Medical Physics Department, Centre of Oncology, Maria Sklodowska-Curie Memorial Cancer Center, Warsaw (Poland); Nesteruk, Marta [Faculty of Physics, University of Warsaw, Warsaw (Poland); Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich (Switzerland); Brzozowska, Beata [Faculty of Physics, University of Warsaw, Warsaw (Poland); Kukołowicz, Paweł F. [Medical Physics Department, Centre of Oncology, Maria Sklodowska-Curie Memorial Cancer Center, Warsaw (Poland)

    2017-04-01

    The aim of this study was to propose a method to predict the minimum achievable mean lung dose (MLD) and corresponding dosimetric parameters for organs-at-risk (OAR) based on individual patient anatomy. For each patient, the dose for 36 equidistant individual multileaf collimator shaped fields in the treatment planning system (TPS) was calculated. Based on these dose matrices, the MLD for each patient was predicted by the in-house DosePredictor software, in which the solution of linear equations was implemented. The software prediction results were validated based on 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans previously prepared for 16 patients with stage III non–small-cell lung cancer (NSCLC). For each patient, dosimetric parameters derived from plans and the results calculated by DosePredictor were compared. The MLD, the maximum dose to the spinal cord (D_max,cord) and the mean esophageal dose (MED) were analyzed. There was a strong correlation between the MLD calculated by the DosePredictor and those obtained in treatment plans regardless of the technique used. The correlation coefficient was 0.96 for both 3D-CRT and VMAT techniques. In a similar manner, MED correlations of 0.98 and 0.96 were obtained for 3D-CRT and VMAT plans, respectively. The maximum dose to the spinal cord was not predicted very well. The correlation coefficient was 0.30 and 0.61 for 3D-CRT and VMAT, respectively. The presented method allows us to predict the minimum MLD and corresponding dosimetric parameters to OARs without the necessity of plan preparation. The method can serve as a guide during the treatment planning process, for example, as initial constraints in VMAT optimization. It allows the probability of lung pneumonitis to be predicted.

  10. Current evaluation of dose rate calculation - analytical method

    International Nuclear Information System (INIS)

    Tello, Marcos; Vilhena, Marco Tulio

    1996-01-01

    The accuracy of dose calculations based on pencil beam formulas, such as the Fokker-Planck and Fermi equations for charged particle transport, is studied, and a methodology to solve the Boltzmann transport equation is suggested.

  11. Practical methods of dose reduction to the bladder wall

    International Nuclear Information System (INIS)

    Smith, E.M.; Warner, G.G.

    1976-01-01

    The radiation dose to the bladder wall following the administration of radionuclides to patients can be reduced by a factor between 25 percent and 75 percent when the effective half-life for the radioactivity entering the urine is two hours or less. A significant but smaller reduction in dose to the gonads may also be achieved in situations where the major fraction of the administered activity is rapidly excreted in the urine. This reduction in dose is achieved by ensuring that the patient has between 50 and 150 ml of urine in his bladder when the radioactivity is injected, and is encouraged to void between one and two hours after the activity has been administered. The interrelationship of voiding schedule, effective half-life, initial urine volume, and demand urination has been analyzed in these studies. In addition, the significance of the rate of urine production and volume of urine in the bladder on the radiation dose to the bladder is demonstrated
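
    The interplay of voiding schedule and effective half-life can be explored with a simple compartment model. The sketch below is an illustration, not the authors' model: all excretion is assumed to pass through the urine, each void empties the bladder completely, and the time-integrated bladder activity is used as a surrogate for bladder-wall dose.

        import numpy as np

        def bladder_integrated_activity(t_half_eff_h, void_times_h,
                                        a0=1.0, t_end_h=24.0, dt=0.001):
            """Time-integrated bladder activity (MBq.h per MBq administered).

            Activity enters the bladder at the rate the body loses it,
            lambda_eff * A(t); each void empties the bladder completely.
            """
            lam = np.log(2.0) / t_half_eff_h
            voids = sorted(void_times_h)
            t, bladder, integral = 0.0, 0.0, 0.0
            while t < t_end_h:
                bladder += lam * a0 * np.exp(-lam * t) * dt  # inflow from body
                integral += bladder * dt
                t += dt
                if voids and t >= voids[0]:
                    bladder = 0.0  # complete voiding
                    voids.pop(0)
            return integral

        # Voiding every 2 h versus every 4 h over one day (2 h effective half-life):
        print(round(bladder_integrated_activity(2.0, list(range(2, 25, 2))), 3))
        print(round(bladder_integrated_activity(2.0, list(range(4, 25, 4))), 3))
        # The more frequent schedule yields less than half the integrated activity.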

  12. Dose rate measuring device and dose rate measuring method using the same

    International Nuclear Information System (INIS)

    Urata, Megumu; Matsushita, Takashi; Hanazawa, Sadao; Konno, Takahiro; Chiba, Yoshinori; Yumitate, Tadahiro

    1998-01-01

    The device of the present invention comprises a scintillation fiber scope, elongated in the direction of the height of a pressure vessel, which emits light upon the incidence of radiation in order to detect it; a radioactivity measuring device for measuring a dose rate based on the detection by the fiber scope; and a reel means for dispensing and taking up the fiber scope. It is constituted such that the dose rate of the pressure vessel and that of the shroud are determined independently. Consequently, when the removed shroud is placed in a container, excessive shielding is not necessary; in addition, the device can reliably be inserted into and withdrawn from the complicated spaces between the pressure vessel and the shroud; and further, the dose rates of the pressure vessel and of the shroud can be measured fairly accurately even when their thicknesses differ greatly. (N.H.)

  13. Dose rate measuring device and dose rate measuring method using the same

    Energy Technology Data Exchange (ETDEWEB)

    Urata, Megumu; Matsushita, Takashi; Hanazawa, Sadao; Konno, Takahiro; Chiba, Yoshinori; Yumitate, Tadahiro

    1998-11-13

    The device of the present invention comprises a scintillation fiber scope, elongated in the direction of the height of a pressure vessel, which emits light upon the incidence of radiation in order to detect it; a radioactivity measuring device for measuring a dose rate based on the detection by the fiber scope; and a reel means for dispensing and taking up the fiber scope. It is constituted such that the dose rate of the pressure vessel and that of the shroud are determined independently. Consequently, when the removed shroud is placed in a container, excessive shielding is not necessary; in addition, the device can reliably be inserted into and withdrawn from the complicated spaces between the pressure vessel and the shroud; and further, the dose rates of the pressure vessel and of the shroud can be measured fairly accurately even when their thicknesses differ greatly. (N.H.)

  14. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  15. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Science.gov (United States)

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate
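
    For reference, the 1-NN assignment criticised above amounts to copying the taxonomy of the best-scoring reference sequence. The toy sketch below scores pre-aligned sequences by naive per-position identity, standing in for the BLAST search used in practice; the sequences and taxa are invented.

        def identity(a, b):
            """Naive per-position identity of two pre-aligned, equal-length sequences."""
            return sum(x == y for x, y in zip(a, b)) / len(a)

        def one_nn_assign(query, references):
            """references: (sequence, taxon) pairs; returns the best-hit taxon."""
            best_taxon, best_score = None, -1.0
            for seq, taxon in references:
                score = identity(query, seq)
                if score > best_score:
                    best_taxon, best_score = taxon, score
            return best_taxon, best_score

        refs = [("ACGTACGTAC", "Genus_A"), ("ACGTTCGTAG", "Genus_B")]
        print(one_nn_assign("ACGTACGTAA", refs))  # -> ('Genus_A', 0.9)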

  16. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  17. Estimation of dose in irradiated chicken bone by ESR method

    International Nuclear Information System (INIS)

    Tanabe, Hiroko; Hougetu, Daisuke

    1998-01-01

    The author studied the conditions needed for routine estimation of the radiation dose in chicken bone by repeated re-irradiation and measurement of ESR signals. Chicken meat containing bone was γ-irradiated at doses of up to 3 kGy, accepted as the commercially used dose. The results show that the key points in sample preparation and ESR measurement are as follows: both ends of the bone are cut off and the central part of compact bone is used for the experiment. To obtain an accurate ESR spectrum, marrow should be scraped out completely. Bone fragments of 1-2 mm particle size and ca. 100 mg mass are recommended to obtain a stable and maximum signal. In practice, the radiation dose is estimated by re-irradiating up to 5 kGy and extrapolating the signal intensity data to zero using linear regression analysis. For example, in one experiment, the estimated doses of chicken bones initially irradiated at 3.0 kGy, 1.0 kGy, 0.50 kGy and 0.25 kGy were 3.4 kGy, 1.3 kGy, 0.81 kGy and 0.57 kGy. (author)

  18. Required doses for projection methods in X-ray diagnosis

    International Nuclear Information System (INIS)

    Hagemann, G.

    1992-01-01

    The ideal dose requirement was stated by Cohen et al. (1981) in a formula based on a parallel beam, maximum quantum yield and the Bucky grid effect, depending on the signal-to-noise ratio and object contrast. This was checked by means of contrast-detail diagrams measured with a hole phantom, and was additionally compared with measurement results obtained with acrylic glass phantoms. The optimal dose requirement is obtained by the closest technically possible approach to the ideal requirement level. Examples are given for x-ray equipment with Gd2O2S screen-film systems, for grid screen mammography, and for new thoracic examination systems for mass screenings. Finally, a few values are stated concerning the dose requirement, or the analogous time required for fluorescent screening, in angiography and interventional radiology, as well as in dentistry and paediatric x-ray diagnostics. (orig./HP) [de]

  19. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care we...... document pronounced regional differences in adherence to guidelines and can help to identify gaps and direct target interventions. It may serve as a tool for assessment and benchmarking the clinical management of HIV-patients in any setting worldwide....

  20. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as

  1. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  2. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  3. Dose determination in irradiated chicken meat by ESR method

    International Nuclear Information System (INIS)

    Polat, M.

    1996-01-01

    In this work, the properties of the radicals produced in chicken bones have been investigated by the ESR technique to determine the dose applied to chicken meat during food irradiation. For this goal, drumsticks from 6-8 week old chickens purchased from a local market were irradiated at dose levels of 0, 2, 4, 6, 8 and 10 kGy. The ESR spectra of powder samples prepared from the bones of the drumsticks were then investigated. Unirradiated chicken bones were observed to show a weak ESR signal of single-line character. CO2- ionic radicals of axial symmetry, with g-values of 1.9973 and 2.0025, were observed to be produced in the irradiated samples, giving rise to a three-peak ESR spectrum. In addition, the signal intensities of the samples were found to depend linearly on the irradiation dose in the 0-10 kGy dose range. Powder samples prepared from chicken leg bones, cleaned of their meat and marrow and irradiated at dose levels of 1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 16, 18, 20 and 22 kGy, were used to obtain the dose-response curve. This curve was found to have a biphasic character: the dose yield was higher in the 12-18 kGy dose range, and a decrease appears in the curve above 18 kGy. The radicals produced in the bones were found to be the same whether the irradiation was performed after stripping the meat and removing the marrow from the bone or before stripping. The ESR spectra of both irradiated and non-irradiated samples were investigated in the temperature range 100 K-450 K and the changes in the ESR spectrum of the CO2- radical were studied. For non-irradiated samples (controls), the signal intensities were found to decrease as the temperature was increased. The same investigation was carried out for irradiated samples, and it was concluded that the signal intensities of the peaks of the radical spectrum increase in the temperature range 100 K-330 K, then decrease above 330 K. The change in the

  4. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information systems' and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  5. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)

    2014-07-15

    Purpose: To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5-55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine the correlation between organ dose and SSDE. Organ dose correlation factors (CF_SSDE^organ) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CF_SSDE^organ were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5-55 kg) and mean patient age was 6 yrs ± 5 (range 4 months to 23 yrs). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm, thus showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CF_SSDE^organ were determined for a total of 23 organs in the chest and abdominopelvic regions across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7-1.4) and abdominopelvic region (average 0.9; range 0.7-1.3) was near unity. For organs/tissues that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range 0.1-0.4) for both the chest and abdominopelvic regions. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CF_SSDE^organ, was compared to
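
    The resulting recipe is one multiplication per organ: estimated organ dose = CF_SSDE^organ x SSDE. The correction factors in the sketch below are invented placeholders; the paper's factors span 23 organs and nine weight subcategories.

        # Invented illustrative correction factors CF_SSDE^organ for one
        # weight subcategory; the paper tabulates 23 organs x 9 weight bands.
        CF = {"liver": 1.1, "kidneys": 1.0, "skin": 0.3}

        def organ_dose_mgy(ssde_mgy, organ):
            """Estimate an organ dose from the patient-specific SSDE."""
            return CF[organ] * ssde_mgy

        # Patient with an SSDE of 8 mGy from an abdominopelvic scan:
        for organ in CF:
            print(organ, round(organ_dose_mgy(8.0, organ), 1), "mGy")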

  6. Recommended environmental dose calculation methods and Hanford-specific parameters

    International Nuclear Information System (INIS)

    Schreckhise, R.G.; Rhoads, K.; Napier, B.A.; Ramsdell, J.V.; Davis, J.S.

    1993-03-01

    This document was developed to support the Hanford Environmental Dose overview Panel (HEDOP). The Panel is responsible for reviewing all assessments of potential doses received by humans and other biota resulting from the actual or possible environmental releases of radioactive and other hazardous materials from facilities and/or operations belonging to the US Department of Energy on the Hanford Site in south-central Washington. This document serves as a guide to be used for developing estimates of potential radiation doses, or other measures of risk or health impacts, to people and other biota in the environs on and around the Hanford Site. It provides information to develop technically sound estimates of exposure (i.e., potential or actual) to humans or other biotic receptors that could result from the environmental transport of potentially harmful materials that have been, or could be, released from Hanford operations or facilities. Parameter values and information that are specific to the Hanford environs as well as other supporting material are included in this document

  7. Recommended environmental dose calculation methods and Hanford-specific parameters

    Energy Technology Data Exchange (ETDEWEB)

    Schreckhise, R.G.; Rhoads, K.; Napier, B.A.; Ramsdell, J.V. (Pacific Northwest Lab., Richland, WA (United States)); Davis, J.S. (Westinghouse Hanford Co., Richland, WA (United States))

    1993-03-01

    This document was developed to support the Hanford Environmental Dose overview Panel (HEDOP). The Panel is responsible for reviewing all assessments of potential doses received by humans and other biota resulting from the actual or possible environmental releases of radioactive and other hazardous materials from facilities and/or operations belonging to the US Department of Energy on the Hanford Site in south-central Washington. This document serves as a guide to be used for developing estimates of potential radiation doses, or other measures of risk or health impacts, to people and other biota in the environs on and around the Hanford Site. It provides information to develop technically sound estimates of exposure (i.e., potential or actual) to humans or other biotic receptors that could result from the environmental transport of potentially harmful materials that have been, or could be, released from Hanford operations or facilities. Parameter values and information that are specific to the Hanford environs as well as other supporting material are included in this document.

  8. Dose analysis in Brjansk region during the restoration period of nuclear accident and effects of dose reduction methods in Chernobyl

    International Nuclear Information System (INIS)

    Ramzaev, V.; Kovalenko, V.; Krivonsov, S.

    1999-01-01

    The exposure pathways to the people in this area were analysed, and some decontamination methods and techniques are explained. The spatial dose rate, whole-body dose and external exposure were measured for population groups such as pensioners, jobless persons, outdoor laborers, indoor laborers and children. The new whole-body counter used can reduce the influence of external contamination on the 661 keV γ-ray measurement. The correlation coefficient between the soil contamination level and the external exposure was 0.99, but that between the cesium-137 content in soil and the internal exposure was -0.2, showing no correlation. The main source of cesium-137 in the body was milk from privately kept cows in each village. The radioactive cesium concentration in 40% of the milk samples was more than 370 Bq/l. More than 75% of mushroom and strawberry samples showed 600 Bq/kg and over. Other foods contained less cesium than the foods above. Decontamination of roofs and gardens, treatment of milk and improved manuring of grassland were carried out in Smajalch. The most effective method appeared to be the filtration of milk. Each method contributed to reducing the average annual dose toward 1 mSv by the following year. (S.Y.)

  9. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  10. Update on the Code Intercomparison and Benchmark for Muon Fluence and Absorbed Dose Induced by an 18 GeV Electron Beam After Massive Iron Shielding

    Energy Technology Data Exchange (ETDEWEB)

    Fasso, A. [SLAC; Ferrari, A. [CERN; Ferrari, A. [HZDR, Dresden; Mokhov, N. V. [Fermilab; Mueller, S. E. [HZDR, Dresden; Nelson, W. R. [SLAC; Roesler, S. [CERN; Sanami, t.; Striganov, S. I. [Fermilab; Versaci, R. [Unlisted, CZ

    2016-12-01

    In 1974, Nelson, Kase and Svensson published an experimental investigation on muon shielding around SLAC high-energy electron accelerators [1]. They measured muon fluence and absorbed dose induced by 14 and 18 GeV electron beams hitting a copper/water beam dump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at that time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes, and with the SLAC data.

  11. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be an appropriate tool in the search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study indicates the premises of using benchmarking in HEIs and contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience of active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived from the conducted analysis.

  12. A method for describing the doses delivered by transmission x-ray computed tomography

    International Nuclear Information System (INIS)

    Shope, T.B.; Gagne, R.M.; Johnson, G.C.

    1981-01-01

    A method for describing the absorbed dose delivered by x-ray transmission computed tomography (CT) is proposed which provides a means to characterize the dose resulting from CT procedures consisting of a series of adjacent scans. The dose descriptor chosen is the average dose at several locations in the imaged volume of the central scan of the series. It is shown that this average dose, as defined, for locations in the central scan of the series can be obtained from the integral of the dose profile perpendicular to the scan plane at these same locations for a single scan. This method for estimating the average dose from a CT procedure has been evaluated as a function of the number of scans in the multiple scan procedure and location in the dosimetry phantom using single scan dose profiles obtained from five different types of CT systems. For the higher dose regions in the phantoms, the multiple scan dose descriptor derived from the single scan dose profiles overestimates the multiple scan average dose by no more than 10%, provided the procedure consists of at least eight scans
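
    Numerically, the descriptor amounts to integrating a measured single-scan dose profile D(z) and dividing by the scan increment, in the spirit of the multiple-scan average dose (MSAD). The profile below is an invented Gaussian-like stand-in for a measured one.

        import numpy as np

        # Invented single-scan dose profile D(z) along the scan axis (mGy vs mm),
        # broader than the 10 mm nominal slice because of scatter and penumbra.
        z_mm = np.linspace(-100.0, 100.0, 2001)
        profile_mgy = 10.0 * np.exp(-0.5 * (z_mm / 7.0) ** 2)

        # Average dose at this location for a long series of contiguous scans:
        # MSAD = (1 / increment) * integral of D(z) dz
        increment_mm = 10.0
        msad = np.trapz(profile_mgy, z_mm) / increment_mm
        print(f"multiple-scan average dose ~ {msad:.1f} mGy")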

  13. SU-E-T-86: A Systematic Method for GammaKnife SRS Fetal Dose Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Geneser, S; Paulsson, A; Sneed, P; Braunstein, S; Ma, L [UCSF Comprehensive Cancer Center, San Francisco, CA (United States)

    2015-06-15

    Purpose: Estimating fetal dose is critical to the decision-making process when radiation treatment is indicated during pregnancy. Fetal doses less than 5cGy confer no measurable non-cancer developmental risks but can produce a threefold increase in developing childhood cancer. In this study, we estimate fetal dose for a patient receiving Gamma Knife stereotactic radiosurgery (GKSRS) treatment and develop a method to estimate dose directly from plan details. Methods: A patient underwent GKSRS on a Perfexion unit for eight brain metastases (two infratentorial and one brainstem). Dose measurements were performed using a CC13, head phantom, and solid water. Superficial doses to the thyroid, sternum, and pelvis were measured using MOSFETs during treatment. Because the fetal dose was too low to accurately measure, we obtained measurements proximally to the isocenter, fitted to an exponential function, and extrapolated dose to the fundus of the uterus, uterine midpoint, and pubic symphysis for both the preliminary and delivered plans. Results: The R-squared fit for the delivered doses was 0.995. The estimated fetal doses for the 72 minute preliminary and 138 minute delivered plans range from 0.0014 to 0.028cGy and 0.07 to 0.38cGy, respectively. MOSFET readings during treatment were just above background for the thyroid and negligible for all inferior positions. The method for estimating fetal dose from plan shot information was within 0.2cGy of the measured values at 14cm cranial to the fetal location. Conclusion: Estimated fetal doses for both the preliminary and delivered plan were well below the 5cGy recommended limit. Due to Perfexion shielding, internal dose is primarily governed by attenuation and drops off exponentially. This is the first work that reports fetal dose for a GK Perfexion unit. Although multiple lesions were treated and the duration of treatment was long, the estimated fetal dose remained very low.

  14. SU-E-T-86: A Systematic Method for GammaKnife SRS Fetal Dose Estimation

    International Nuclear Information System (INIS)

    Geneser, S; Paulsson, A; Sneed, P; Braunstein, S; Ma, L

    2015-01-01

    Purpose: Estimating fetal dose is critical to the decision-making process when radiation treatment is indicated during pregnancy. Fetal doses less than 5cGy confer no measurable non-cancer developmental risks but can produce a threefold increase in the risk of developing childhood cancer. In this study, we estimate fetal dose for a patient receiving Gamma Knife stereotactic radiosurgery (GKSRS) treatment and develop a method to estimate dose directly from plan details. Methods: A patient underwent GKSRS on a Perfexion unit for eight brain metastases (two infratentorial and one brainstem). Dose measurements were performed using a CC13 chamber, a head phantom, and solid water. Superficial doses to the thyroid, sternum, and pelvis were measured using MOSFETs during treatment. Because the fetal dose was too low to measure accurately, we obtained measurements proximal to the isocenter, fitted them to an exponential function, and extrapolated the dose to the fundus of the uterus, the uterine midpoint, and the pubic symphysis for both the preliminary and delivered plans. Results: The R-squared value for the fit to the delivered doses was 0.995. The estimated fetal doses for the 72-minute preliminary and 138-minute delivered plans range from 0.0014 to 0.028cGy and 0.07 to 0.38cGy, respectively. MOSFET readings during treatment were just above background for the thyroid and negligible for all inferior positions. The method for estimating fetal dose from plan shot information was within 0.2cGy of the measured values at 14cm cranial to the fetal location. Conclusion: Estimated fetal doses for both the preliminary and delivered plans were well below the 5cGy recommended limit. Due to Perfexion shielding, internal dose is primarily governed by attenuation and drops off exponentially. This is the first work that reports fetal dose for a GK Perfexion unit. Although multiple lesions were treated and the duration of treatment was long, the estimated fetal dose remained very low.
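
    The extrapolation step described in both versions of this record is straightforward to reproduce. The sketch below fits an exponential fall-off to hypothetical dose measurements versus distance from the isocenter and evaluates it at assumed fetal landmark distances; all numbers are invented for illustration and are not the study's data.

        import numpy as np
        from scipy.optimize import curve_fit

        distance = np.array([10.0, 15.0, 20.0, 25.0, 30.0])  # cm from isocenter
        dose = np.array([12.0, 5.1, 2.2, 0.95, 0.40])         # cGy, assumed readings

        def expo(d, a, mu):
            # Exponential fall-off; per the abstract, attenuation by the unit's
            # shielding makes the internal dose drop off roughly exponentially.
            return a * np.exp(-mu * d)

        (a, mu), _ = curve_fit(expo, distance, dose, p0=(50.0, 0.2))

        # Hypothetical distances to the fetal landmarks named in the study.
        for site, d in [("uterine fundus", 45.0), ("uterine midpoint", 55.0),
                        ("pubic symphysis", 65.0)]:
            print(f"{site}: {expo(d, a, mu):.4f} cGy")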

  15. A method to adjust radiation dose-response relationships for clinical risk factors

    DEFF Research Database (Denmark)

    Appelt, Ane Lindegaard; Vogelius, Ivan R

    2012-01-01

    Several clinical risk factors for radiation-induced toxicity have been identified in the literature. Here, we present a method to quantify the effect of clinical risk factors on radiation dose-response curves and apply the method to adjust the dose-response for radiation pneumonitis for patients...
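
    The abstract is truncated, so the authors' exact formulation is not shown here; as one hedged illustration of the general idea, the sketch below shifts the position parameter of a standard logistic dose-response curve by treating each clinical risk factor as a dose-modifying factor. The parameter values and the multiplicative adjustment rule are assumptions for illustration only, not the paper's method.

        import numpy as np

        def logistic_response(dose, d50, gamma50):
            # Standard logistic dose-response with position d50 and normalized
            # slope gamma50 at the steepest point.
            return 1.0 / (1.0 + np.exp(4.0 * gamma50 * (1.0 - dose / d50)))

        def adjusted_d50(d50_baseline, dose_modifying_factors):
            # Assumed adjustment rule: each present risk factor scales D50 down,
            # i.e. the patient responds as if irradiated to a higher dose.
            return d50_baseline / np.prod(dose_modifying_factors)

        # Toy numbers: baseline D50 = 30 Gy for pneumonitis, two risk factors.
        d50 = adjusted_d50(30.0, [1.1, 1.2])
        print(f"Adjusted risk at 24 Gy: {logistic_response(24.0, d50, 1.0):.2f}")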

  16. Patient and staff dose optimisation in nuclear medicine diagnosis methods

    International Nuclear Information System (INIS)

    Marta Wasilewska-Radwanska; Katarzyna Natkaniec

    2007-01-01

    , control of detector uniformity. The test for a rotating gamma camera additionally demands controlling the precision of rotation and the imaging system resolution. The radioisotopic and chemical purity of the radiopharmaceuticals is controlled, too. The efficiency of Tc-99m elution from the Mo-99 generator is tested, and the content of the Mo-99 radioisotope in the eluate is measured. Radioisotope diagnosis of the brain, heart, thyroid, stomach, liver, kidneys and bones, as well as lymphoscintigraphy, is performed. The procedure used for patient and staff dose optimisation consists of: 1) a control dose measurement performed with a dosemeter on a tissue-like phantom containing the selected radiopharmaceutical at the same activity as will be applied to the patient, 2) calculation of the patient dose rate, 3) calculation of the staff dose based on the readings of personnel dosemeters (film or TLD), 4) preparation of the Quality Assurance instructions for the staff responsible for patient safety. Independently of the patient and staff dose optimisation, the Quality Control of gamma camera equipment, e.g. the SPECT X-Ring Nucline (MEDISO), covers the uniformity of the image from a radiopharmaceutical sample and the center of rotation, according to the producer's manual. In addition, special lectures and courses for staff are organized several times per year to ensure Continuous Professional Development (CPD) in the field of Quality Assurance and Quality Control.

  17. Method for simulating dose reduction in digital mammography using the Anscombe transformation.

    Science.gov (United States)

    Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C

    2016-06-01

    This work proposes an accurate method for simulating dose reduction in digital mammography, starting from a clinical image acquired at the standard dose. The method consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for issues specific to digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray-level characteristics as an image acquired at the lower radiation dose. The performance of the proposed algorithm was validated using uniform images and real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures per dose and 256 nonoverlapping ROIs extracted from each image. The authors simulated lower-dose images and compared these with the real images, evaluating the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated and real images acquired at the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images; the relative average error for the local variance was smaller than 1%. A new method is thus proposed for simulating dose reduction in clinical mammograms, in which the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation.
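
    A minimal sketch of the core idea — scale the image to the lower dose and inject only the additional signal-dependent quantum noise, with variances handled in the Anscombe domain — is given below. The published method calibrates the noise mask from flat-field acquisitions and models spatial gain variations; those steps are omitted here, and the pure-Poisson assumption is mine, not the paper's.

        import numpy as np

        def anscombe(x):
            # Anscombe transformation: approximately variance-stabilizes
            # Poisson-distributed data (variance ~ 1).
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            # Simple algebraic inverse (an exact unbiased inverse differs slightly).
            return (y / 2.0) ** 2 - 3.0 / 8.0

        def simulate_dose_reduction(image, dose_factor, rng=None):
            """Scale a linearized, offset-corrected image by dose_factor (<1) and
            add the extra quantum noise of the lower dose. For pure Poisson data
            the scaled image has Anscombe-domain variance ~ dose_factor, so the
            missing variance is (1 - dose_factor)."""
            rng = np.random.default_rng() if rng is None else rng
            scaled = image * dose_factor
            extra_sigma = np.sqrt(max(1.0 - dose_factor, 0.0))
            noisy = anscombe(scaled) + extra_sigma * rng.standard_normal(image.shape)
            return np.clip(inverse_anscombe(noisy), 0.0, None)

        half_dose = simulate_dose_reduction(np.full((64, 64), 400.0), 0.5)
        print(half_dose.mean(), half_dose.std())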

  18. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  19. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  20. SU-E-T-148: Benchmarks and Pre-Treatment Reviews: A Study of Quality Assurance Effectiveness

    International Nuclear Information System (INIS)

    Lowenstein, J; Nguyen, H; Roll, J; Walsh, A; Tailor, A; Followill, D

    2015-01-01

    Purpose: To determine the impact benchmarks and pre-treatment reviews have on improving the quality of submitted clinical trial data. Methods: Benchmarks are used to evaluate a site's ability to develop a treatment plan that meets a specific protocol's treatment guidelines before the site places its first patient on the protocol. A pre-treatment review covers an actual patient placed on the protocol, whose dosimetry and contour volumes are evaluated for compliance with protocol guidelines before treatment is allowed to begin. A key component of these QA mechanisms is that sites are provided timely feedback to educate them on how to plan per the protocol and to prevent protocol deviations in patients accrued to the protocol. For both benchmarks and pre-treatment reviews a dose volume analysis (DVA) was performed using MIM software™. For pre-treatment reviews a volume contour evaluation was also performed. Results: IROC Houston performed a QA effectiveness analysis of a protocol which required both benchmarks and pre-treatment reviews. In 70 percent of the patient cases submitted, the benchmark played an effective role in assuring that the pre-treatment review of the case met protocol requirements. The 35 percent of sites that failed the benchmark subsequently modified their planning technique to pass the benchmark before being allowed to submit a patient for pre-treatment review. However, 30 percent of the submitted cases failed the pre-treatment review, and the majority of these (71 percent) failed the DVA. Twenty percent of sites submitting patients failed to correct the dose volume discrepancies indicated by the benchmark case. Conclusion: Benchmark cases and pre-treatment reviews can be an effective QA tool to educate sites on protocol guidelines and to minimize deviations. Without the benchmark cases it is possible that 65 percent of the cases undergoing a pre-treatment review would have failed to meet the protocol's requirements. Support: U24-CA-180803

  1. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems proposed by Osaka University to the NEACRP in 1988 has been calculated by many participants, and the corresponding results are summarized in this report. The results for k_eff, control rod worth and region-averaged fluxes for the four proposed core models, calculated using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes.

  2. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics]; Nillesen, P. [NUON NV (Netherlands)]; Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management]

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulatory benchmarking can influence the "regulation game", the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine the implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)

  3. Evaluation of radiation dose to patients in intraoral dental radiography using Monte Carlo Method

    International Nuclear Information System (INIS)

    Park, Il; Kim, Kyeong Ho; Oh, Seung Chul; Song, Ji Young

    2016-01-01

    The use of dental radiographic examinations is common, although the radiation dose resulting from dental radiography is relatively small. It is nevertheless necessary to evaluate the radiation dose from dental radiography for radiation safety purposes. The objectives of the present study were to develop a dosimetry method for intraoral dental radiography using a Monte Carlo radiation transport code and to calculate the organ doses and effective doses received by patients from different types of intraoral radiographies. The radiological properties of the dental radiography equipment, including the x-ray energy spectrum, were characterized and simulated using the MCNP code. Organ doses and effective doses to patients were calculated by MCNP simulation with computational adult phantoms. At the typical equipment settings (60 kVp, 7 mA, and 0.12 s), the entrance air kerma was 1.79 mGy and the measured half value layer was 1.82 mm. The half value layer calculated by MCNP simulation agreed well with the measured value. Effective doses from intraoral radiographies ranged from 1 μSv for the maxilla premolar projection to 3 μSv for the maxilla incisor projection. The oral cavity layer (23–82 μSv) and salivary glands (10–68 μSv) received relatively high radiation doses, and the thyroid also received a high radiation dose (3–47 μSv) in some examinations. The dosimetry method developed and the radiation doses evaluated in this study can be utilized for policy making, patient dose management, and the development of low-dose equipment, and can ultimately contribute to decreasing the radiation dose to patients.

  4. Evaluation of radiation dose to patients in intraoral dental radiography using Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Park, Il; Kim, Kyeong Ho; Oh, Seung Chul; Song, Ji Young [Dept. of Nuclear Engineering, Kyung Hee University, Yongin (Korea, Republic of)]

    2016-11-15

    The use of dental radiographic examinations is common, although the radiation dose resulting from dental radiography is relatively small. It is nevertheless necessary to evaluate the radiation dose from dental radiography for radiation safety purposes. The objectives of the present study were to develop a dosimetry method for intraoral dental radiography using a Monte Carlo radiation transport code and to calculate the organ doses and effective doses received by patients from different types of intraoral radiographies. The radiological properties of the dental radiography equipment, including the x-ray energy spectrum, were characterized and simulated using the MCNP code. Organ doses and effective doses to patients were calculated by MCNP simulation with computational adult phantoms. At the typical equipment settings (60 kVp, 7 mA, and 0.12 s), the entrance air kerma was 1.79 mGy and the measured half value layer was 1.82 mm. The half value layer calculated by MCNP simulation agreed well with the measured value. Effective doses from intraoral radiographies ranged from 1 μSv for the maxilla premolar projection to 3 μSv for the maxilla incisor projection. The oral cavity layer (23–82 μSv) and salivary glands (10–68 μSv) received relatively high radiation doses, and the thyroid also received a high radiation dose (3–47 μSv) in some examinations. The dosimetry method developed and the radiation doses evaluated in this study can be utilized for policy making, patient dose management, and the development of low-dose equipment, and can ultimately contribute to decreasing the radiation dose to patients.
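
    The final combination step in both versions of this record — organ equivalent doses weighted by ICRP 103 tissue weighting factors and summed to an effective dose — can be written in a few lines. The weighting factors below are from ICRP Publication 103 (oral mucosa shares the 0.12 remainder weight among 13 tissues); the organ doses are invented, not the study's values.

        # Minimal effective-dose sketch: E = sum over tissues of w_T * H_T.
        ICRP103_WT = {
            "salivary_glands": 0.01,
            "thyroid": 0.04,
            "brain": 0.01,
            "oral_mucosa": 0.12 / 13,   # remainder tissues share w_T = 0.12
        }

        organ_dose_uSv = {               # assumed equivalent doses, not the paper's
            "salivary_glands": 40.0,
            "thyroid": 20.0,
            "brain": 5.0,
            "oral_mucosa": 60.0,
        }

        effective_dose = sum(w * organ_dose_uSv[t] for t, w in ICRP103_WT.items())
        print(f"Effective dose: {effective_dose:.2f} uSv")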

  5. Modeling of tube current modulation methods in computed tomography dose calculations for adult and pregnant patients

    International Nuclear Information System (INIS)

    Caracappa, Peter F.; Xu, X. George; Gu, Jianwei

    2011-01-01

    The comparatively high dose and increasing frequency of computed tomography (CT) examinations have spurred the development of techniques for reducing the radiation dose to imaging patients. Among these is the application of tube current modulation (TCM), which can be applied longitudinally along the body, rotationally about the body, or both. Existing computational models for calculating dose from CT examinations do not include TCM techniques. Dose calculations using Monte Carlo methods have previously been prepared for constant-current rotational exposures at various positions along the body and for the principal exposure projections, for several sets of computational phantoms, including adult male and female and pregnant patients. Dose calculations for CT scans with TCM are prepared by appropriately weighting the existing dose data. Longitudinal TCM doses can be obtained by weighting the dose at each z-axis scan position by the relative tube current at that position. Rotational TCM doses are weighted using the relative organ doses from the principal projections as a function of the current at each rotational angle. Significant dose reductions of 15% to 25% to fetal tissues are found from simulations of longitudinal TCM schemes for pregnant patients of different gestational ages. Weighting factors for each organ in rotational TCM schemes applied to adult male and female patients have also been found. As the application of TCM techniques becomes more prevalent, the need for including TCM in CT dose estimates will necessarily increase. (author)
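
    The weighting scheme for longitudinal TCM described above reduces to a sum of per-position dose contributions scaled by the local tube current. A toy version, with an invented dose-per-mAs profile and an invented current profile:

        import numpy as np

        z = np.arange(50)                                   # slice index along body
        dose_per_mAs = np.exp(-((z - 25.0) / 10.0) ** 2)    # assumed organ dose per
                                                            # unit tube current at z
        constant_mA = np.full(z.size, 200.0)
        tcm_mA = 200.0 * (0.7 + 0.3 * np.cos(2 * np.pi * z / 50.0))  # toy TCM curve

        # Longitudinal weighting: each position's contribution scaled by the
        # relative tube current there.
        dose_constant = np.sum(dose_per_mAs * constant_mA)
        dose_tcm = np.sum(dose_per_mAs * tcm_mA)
        print(f"TCM / constant-current dose ratio: {dose_tcm / dose_constant:.3f}")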

  6. Design study on dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    Yoshida, Yoshitaka; Irie, Takashi; Kohriyama, Tamio; Kudo, Seiichi; Nishimura, Kazuya

    2001-01-01

    When a severe accident in a nuclear power plant is assumed, radiation dose rate distributions or maps in the plant, together with estimated doses, are required for rescue activities, accident management, repair of failed parts and evaluation of employee exposure. However, it can be difficult to obtain these accurately as the accident progresses, because radiation monitors are not always installed in the areas where accident management is planned or where repair work on safety-related equipment is anticipated. In this work, we analyzed the diffusion of radioactive materials during a severe accident in a pressurized water reactor plant, investigated a method to obtain the radiation dose rate in the plant from the estimated radioactive sources, built a prototype analysis system by modeling a specific set of components and buildings in the plant, and then evaluated its usefulness. As a result, we obtained the following: (1) A new dose evaluation method was established to predict the radiation dose rate at any point in the plant during a severe accident scenario. (2) The evaluation of total dose, including the route and time of movement for accident management and repair work, is useful for estimating the radiation dose limits for these actions by employees. (3) The radiation dose rate map is effective for identifying high radiation areas and for choosing routes with lower radiation dose rates. (author)

  7. Design study on dose evaluation method for employees at severe accident

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Yoshitaka; Irie, Takashi; Kohriyama, Tamio [Institute of Nuclear Safety Systems Inc., Mihama, Fukui (Japan)]; Kudo, Seiichi [Mitsubishi Heavy Industries Ltd., Tokyo (Japan)]; Nishimura, Kazuya [Computer Software Development Co., Ltd., Tokyo (Japan)]

    2001-09-01

    When a severe accident in a nuclear power plant is assumed, radiation dose rate distributions or maps in the plant, together with estimated doses, are required for rescue activities, accident management, repair of failed parts and evaluation of employee exposure. However, it can be difficult to obtain these accurately as the accident progresses, because radiation monitors are not always installed in the areas where accident management is planned or where repair work on safety-related equipment is anticipated. In this work, we analyzed the diffusion of radioactive materials during a severe accident in a pressurized water reactor plant, investigated a method to obtain the radiation dose rate in the plant from the estimated radioactive sources, built a prototype analysis system by modeling a specific set of components and buildings in the plant, and then evaluated its usefulness. As a result, we obtained the following: (1) A new dose evaluation method was established to predict the radiation dose rate at any point in the plant during a severe accident scenario. (2) The evaluation of total dose, including the route and time of movement for accident management and repair work, is useful for estimating the radiation dose limits for these actions by employees. (3) The radiation dose rate map is effective for identifying high radiation areas and for choosing routes with lower radiation dose rates. (author)

  8. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious

  9. Benchmarking, Total Quality Management, and Libraries.

    Science.gov (United States)

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  10. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis, with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  11. Application of Glow Curve Deconvolution Method to Evaluate Low Dose TLD LiF

    International Nuclear Information System (INIS)

    Kurnia, E; Oetami, H R; Mutiah

    1996-01-01

    The thermoluminescence dosimeter (TLD), especially the LiF:Mg,Ti material, is one of the most practical personal dosimeters known to date. Measuring doses below 100 µGy with a TLD reader at a high precision level is, however, very difficult, and software analysis is used to improve the precision of the reader. The objective of the research is to compare three TL glow curve analysis methods for doses between 5 and 250 µGy. The first method is manual analysis: dose information is obtained from the area under the glow curve between preselected temperature limits, and the background signal is estimated by a second readout following the first. The second method is a deconvolution method, mathematically separating the glow curve into four peaks; dose information is obtained from the area of peak 5, and the background signal is eliminated computationally. The third method is also a deconvolution method, but the dose is represented by the sum of the areas of peaks 3, 4 and 5. The results show that the sum of peaks 3, 4 and 5 improves reproducibility sixfold over manual analysis at a dose of 20 µGy, and reduces the minimum measurable dose to 10 µGy, compared with 60 µGy for manual analysis and 20 µGy for the peak 5 area method. In terms of linearity, the sum of peaks 3, 4 and 5 yields an exactly linear dose-response curve over the entire dose range.
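
    A reduced sketch of the deconvolution idea — fit the glow curve as a sum of overlapping peaks and read the dose from selected peak areas — follows. Real TLD analyses use first-order-kinetics peak shapes; the Gaussians and all numerical values here are stand-ins chosen only to keep the example short.

        import numpy as np
        from scipy.optimize import curve_fit

        def glow_model(T, *params):
            # Sum of Gaussian peaks; params = (amplitude, centre, width) per peak.
            y = np.zeros_like(T)
            for a, c, w in zip(params[0::3], params[1::3], params[2::3]):
                y += a * np.exp(-0.5 * ((T - c) / w) ** 2)
            return y

        T = np.linspace(100.0, 300.0, 400)                  # readout temperature
        rng = np.random.default_rng(0)
        signal = glow_model(T, 1.0, 160, 12, 2.5, 200, 14, 4.0, 225, 15)
        signal = signal + rng.normal(0.0, 0.05, T.size)     # synthetic noisy curve

        p0 = [1, 155, 10, 2, 195, 12, 4, 230, 12]           # initial guesses
        popt, _ = curve_fit(glow_model, T, signal, p0=p0)

        # Peak areas (amplitude * width * sqrt(2*pi)) carry the dose information.
        areas = [popt[i] * popt[i + 2] * np.sqrt(2.0 * np.pi) for i in (0, 3, 6)]
        print("Fitted peak areas (a.u.):", np.round(areas, 2))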

  12. Dose evaluation due to electron spin resonance method

    International Nuclear Information System (INIS)

    Nakajima, Toshiyuki

    1989-01-01

    A radiation dosimeter has been developed based on free radicals created in sucrose. The free radicals were observed using electron spin resonance (ESR) equipment. The ESR absorption due to free radicals in sucrose appears at the magnetic field between the third and fourth ESR lines of the Mn²⁺ standard sample. Sucrose as a radiation dosimeter responds linearly to doses from 5 × 10⁻³ Gy to 10⁵ Gy. If a newer model of ESR equipment is used and the ESR observation is carried out at a lower temperature, such as liquid nitrogen or liquid helium temperature, the sucrose ESR dosimeter should be able to detect about 5 × 10⁻⁴ Gy or less. Fading of the free radicals in the irradiated sucrose was scarcely observed about six months after irradiation, and was also scarcely observed in irradiated sucrose stored at 55 °C and 100 °C for one hour or more. It is concluded from these radiation properties that sucrose is useful as an accident or emergency dosimeter for the general population. (author)

  13. A method of estimating conceptus doses resulting from multidetector CT examinations during all stages of gestation

    International Nuclear Information System (INIS)

    Damilakis, John; Tzedakis, Antonis; Perisinakis, Kostas; Papadakis, Antonios E.

    2010-01-01

    Purpose: Current methods for the estimation of conceptus dose from multidetector CT (MDCT) examinations performed on the mother provide dose data for typical protocols with a fixed scan length. However, modified low-dose imaging protocols are frequently used during pregnancy. The purpose of the current study was to develop a method for the estimation of conceptus dose from any MDCT examination of the trunk performed during any stage of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study to model the Siemens Sensation 16 and Sensation 64 MDCT scanners. Four mathematical phantoms were used, simulating women at 0, 3, 6, and 9 months of gestation. The contribution to the conceptus dose from single simulated scans was obtained at various positions across the phantoms. To investigate the effect of maternal body size and conceptus depth on conceptus dose, phantoms of different sizes were produced by adding layers of adipose tissue around the trunk of the mathematical phantoms. To verify the MCNP results, conceptus dose measurements were carried out by means of three physical anthropomorphic phantoms, simulating pregnancy at 0, 3, and 6 months of gestation, and thermoluminescence dosimetry (TLD) crystals. Results: The results consist of Monte Carlo-generated normalized conceptus dose coefficients for single scans across the four mathematical phantoms. These coefficients were defined as the conceptus dose contribution from a single scan divided by the CTDI free-in-air measured with identical scanning parameters. Data have been produced to take into account the effect of maternal body size and conceptus position variations on conceptus dose. Conceptus doses measured with TLD crystals showed a difference of up to 19% compared to those estimated by mathematical simulations. Conclusions: Estimation of conceptus dose from MDCT examinations of the trunk performed on pregnant patients during all stages of gestation can be made with the coefficients provided in this study.
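
    The reconstruction implied by these coefficients is a simple sum: conceptus dose = CTDI free-in-air × the sum of the normalized single-scan coefficients over the scanned range. The sketch below uses an invented coefficient curve and CTDI value purely to show the bookkeeping.

        import numpy as np

        scan_positions = np.arange(-10, 11)                  # cm relative to conceptus
        coeff = 0.05 * np.exp(-np.abs(scan_positions) / 6.0) # assumed normalized
                                                             # conceptus dose coefficients
        ctdi_free_in_air = 15.0                              # mGy per rotation, assumed

        # Sum the per-scan contributions over the scanned range.
        conceptus_dose = ctdi_free_in_air * coeff.sum()
        print(f"Estimated conceptus dose: {conceptus_dose:.2f} mGy")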

  14. Evaluation of a post-analysis method for cumulative dose distribution in stereotactic body radiotherapy

    International Nuclear Information System (INIS)

    Imae, Toshikazu; Takenaka, Shigeharu; Saotome, Naoya

    2016-01-01

    The purpose of this study was to evaluate a post-analysis method for the cumulative dose distribution in stereotactic body radiotherapy (SBRT) using volumetric modulated arc therapy (VMAT). VMAT is capable of acquiring respiratory signals derived from projection images together with machine parameters based on machine logs during VMAT delivery. Dose distributions were reconstructed from the respiratory signals and machine parameters for the conditions where the respiratory signals were used without division or were divided into 4 or 10 phases. The dose distribution of each respiratory phase was calculated on the planned four-dimensional CT (4DCT). Summation of the dose distributions was carried out using deformable image registration (DIR), and the cumulative dose distributions were compared with those of the corresponding plans. Without division, dose differences between the cumulative distribution and the plan were not significant. In the conditions where the respiratory signals were divided, dose differences were observed as overdose in the cranial region and underdose in the caudal region of the planning target volume (PTV). Differences between 4 and 10 phases were not significant. The present method was feasible for evaluating the cumulative dose distribution in VMAT-SBRT using 4DCT and DIR. (author)

  15. Study on the method or reducing the operator's exposure dose from a C-Arm system

    International Nuclear Information System (INIS)

    Kim, Ki Sik; Song, Jong Nam; Kim, Seung Ok

    2016-01-01

    In this study, we verify the operator's exposure dose from scattered radiation during the operation of C-Arm equipment and provide effective methods of reducing that dose. The exposure dose was lower when the C-Arm was used in the Under Tube configuration than in the Over Tube configuration. The results also showed that the exposure dose to the operator decreased with a thicker shield and as the operator moved away from the central beam axis, and that the dose increased as the procedure time lengthened. Among the three dosimeter positions, the highest exposure dose was measured at the gonads, followed by the chest and the thyroid. In consideration of the working relationship between the operator and the patient, however, the distance cannot be increased indefinitely and the procedure time cannot be shortened indefinitely; the exposure dose therefore has to be reduced by changing the thickness of the radiation shield. Shielding that causes discomfort during surgery risks being neglected, which would only increase the operator's radiation dose. Because a separate control room cannot be used with C-Arm equipment owing to how it is operated, the exposure dose to the operator needs to be reduced by providing appropriately thick radiation shielding devices, such as aprons, during treatment.

  16. A new method for dosing rhodamine B in natural water

    International Nuclear Information System (INIS)

    Marichal, M.; Benoit, R.

    1961-01-01

    A simple and sensitive method well adapted to hydrological research. The dye is first extracted from the water sample by isoamyl alcohol, and the fluorescence of the alcoholic solution, after excitation by ultraviolet radiation, is then measured spectrophotometrically. The sensitivity of the method is about 10⁻¹² (one part in 10¹²), that is, a millionth of a milligram of dye per litre. Reprint of a paper published in 'Chimie Analytique', No. 2, Feb. 1962, p. 70-72.

  17. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: these calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code; the chosen set of benchmarks is believed to be a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries; and these calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems.

  18. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Energy Technology Data Exchange (ETDEWEB)

    Rohée, E. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France)]; Coulon, R., E-mail: romain.coulon@cea.fr [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France)]; Carrel, F. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France)]; Dautremer, T.; Barat, E.; Montagu, T. [CEA, LIST, Laboratoire de Modélisation et Simulation des Systèmes, F-91191 Gif-sur-Yvette (France)]; Normand, S. [CEA, DAM, Le Ponant, DPN/STXN, F-75015 Paris (France)]; Jammes, C. [CEA, DEN, Cadarache, DER/SPEx/LDCI, F-13108 Saint-Paul-lez-Durance (France)]

    2016-11-11

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, difficulties remain in the analysis when full energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is large compared to the signal of a single peak. In this context, this study compares a conventional analysis based on the "iterative peak fitting deconvolution" method with a "nonparametric Bayesian deconvolution" approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex cases of spectra are studied from IAEA benchmark protocol tests and with measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine tuning.

  19. Accuracy of effective dose estimation in personal dosimetry: a comparison between single-badge and double-badge methods and the MOSFET method.

    Science.gov (United States)

    Januzis, Natalie; Belley, Matthew D; Nguyen, Giao; Toncheva, Greta; Lowry, Carolyn; Miller, Michael J; Smith, Tony P; Yoshizumi, Terry T

    2014-05-01

    The purpose of this study was three-fold: (1) to measure the transmission properties of various lead shielding materials, (2) to benchmark the accuracy of commercial film badge readings, and (3) to compare the accuracy of the effective dose (ED) conversion factors (CFs) of the U.S. Nuclear Regulatory Commission methods to the MOSFET method. The transmission properties of lead aprons and the accuracy of film badges were studied using an ion chamber and monitor. ED was determined using an adult male anthropomorphic phantom that was loaded with 20 diagnostic MOSFET detectors and scanned with a whole body CT protocol at 80, 100, and 120 kVp. One commercial film badge was placed at the collar and one at the waist. Individual organ doses and waist badge readings were corrected for lead apron attenuation. ED was computed using ICRP 103 tissue weighting factors, and ED CFs were calculated by taking the ratio of the ED to the badge reading. The measured single-badge CFs were 0.01 (±14.9%), 0.02 (±9.49%), and 0.04 (±15.7%) for 80, 100, and 120 kVp, respectively. The current regulatory ED CF for the single-badge method is 0.3; for the double-badge system, the CFs are 0.04 (collar) and 1.5 (under the lead apron at the waist). The double-badge system provides a better coefficient for the collar at 0.04; however, exposure readings under the apron are usually negligible to zero. Based on these findings, the authors recommend the use of an ED CF of 0.01 for the single-badge system, based on the 80 kVp (effective energy 50.4 keV) data.

  20. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist, with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL's ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating and comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  1. A novel method for the evaluation of uncertainty in dose-volume histogram computation.

    Science.gov (United States)

    Henríquez, Francisco Cutanda; Castrillón, Silvia Vargas

    2008-03-15

    Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take these into account. To account for the associated uncertainties, a probabilistic approach using a new kind of histogram, the dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for the uncertainties associated with the point dose is presented for practical computations. The method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Results show a greater effect on planning target volume coverage than on organs at risk. In cases of steep DVH gradients, such as in planning target volumes, the new method shows the largest differences from the corresponding DVH; thus, the effect of the uncertainty is larger.
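
    Under the rectangular point-dose distribution assumed in the abstract, the expected volume above a threshold has a closed form per voxel, and the dose-expected volume histogram is just its average. A small sketch, with invented voxel doses and half-width:

        import numpy as np

        def expected_dvh(voxel_doses, delta, thresholds):
            """Expected fraction of volume receiving >= D when each point dose
            is uniform on [d - delta, d + delta]:
            P = clip((d + delta - D) / (2 * delta), 0, 1)."""
            d = np.asarray(voxel_doses)[:, None]
            D = np.asarray(thresholds)[None, :]
            prob = np.clip((d + delta - D) / (2.0 * delta), 0.0, 1.0)
            return prob.mean(axis=0)

        rng = np.random.default_rng(1)
        doses = rng.normal(60.0, 1.0, size=10_000)     # toy PTV voxel doses, Gy
        thresholds = np.linspace(55.0, 65.0, 11)
        print(np.round(expected_dvh(doses, delta=1.5, thresholds=thresholds), 3))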

  2. The MSRC Ab Initio Methods Benchmark Suite: A measurement of hardware and software performance in the area of electronic structure methods

    Energy Technology Data Exchange (ETDEWEB)

    Feller, D.F.

    1993-07-01

    This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The "snapshot" nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near-minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver, and it is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine-tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.

  3. ICSBEP-2007, International Criticality Safety Benchmark Experiment Handbook

    International Nuclear Information System (INIS)

    Blair Briggs, J.

    2007-01-01

    1 - Description: The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organisation for Economic Co-operation and Development - Nuclear Energy Agency (OECD-NEA). This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate the calculational techniques used to establish minimum subcritical margins for operations with fissile material. The example calculations presented do not constitute a validation of the codes or cross section data. The work of the ICSBEP is documented as an International Handbook of Evaluated Criticality Safety Benchmark Experiments. Currently, the handbook spans over 42,000 pages and contains 464 evaluations representing 4,092 critical, near-critical, or subcritical configurations, 21 criticality alarm placement/shielding configurations with multiple dose points for each, and 46 configurations categorized as fundamental physics measurements relevant to criticality safety applications. The handbook is intended for use by criticality safety analysts to perform necessary validations of their calculational techniques and is expected to be a valuable tool for decades to come. The ICSBEP Handbook is available on DVD; a DVD may be requested by completing the DVD Request Form on the internet. Access to the Handbook on the Internet requires a password, which may be requested by completing the Password Request Form. The Web address is: http://icsbep.inel.gov/handbook.shtml 2 - Method of solution: Experiments that are found

  4. Comparing the accuracy of high-dimensional neural network potentials and the systematic molecular fragmentation method: A benchmark study for all-trans alkanes

    International Nuclear Information System (INIS)

    Gastegger, Michael; Kauffmann, Clemens; Marquetand, Philipp; Behler, Jörg

    2016-01-01

    Many approaches that have been developed to express the potential energy of large systems exploit the locality of the atomic interactions. A prominent example is the class of fragmentation methods, in which the quantum chemical calculations are carried out for overlapping small fragments of a given molecule and then combined in a second step to yield the system's total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of the high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules have been chosen because they allow reliable reference energies to be extrapolated for very long chains, enabling an assessment of the energies obtained by both methods for alkanes including up to 10 000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference.

  5. Comparing the accuracy of high-dimensional neural network potentials and the systematic molecular fragmentation method: A benchmark study for all-trans alkanes

    Energy Technology Data Exchange (ETDEWEB)

    Gastegger, Michael; Kauffmann, Clemens; Marquetand, Philipp, E-mail: philipp.marquetand@univie.ac.at [Institute of Theoretical Chemistry, Faculty of Chemistry, University of Vienna, Währinger Straße 17, Vienna (Austria)]; Behler, Jörg [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, Universitätsstraße 150, Bochum (Germany)]

    2016-05-21

    Many approaches that have been developed to express the potential energy of large systems exploit the locality of the atomic interactions. A prominent example is the class of fragmentation methods, in which the quantum chemical calculations are carried out for overlapping small fragments of a given molecule and then combined in a second step to yield the system's total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of the high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules have been chosen because they allow reliable reference energies to be extrapolated for very long chains, enabling an assessment of the energies obtained by both methods for alkanes including up to 10 000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference.
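
    The HDNN construction named in both copies of this record — total energy as a sum of atomic energies, each predicted from a descriptor of the atom's local environment — can be caricatured in a few lines. The one-number "descriptor" and the fixed network weights below are toys standing in for trained symmetry-function networks, not anything from the paper.

        import numpy as np

        def atomic_energy(g):
            # Tiny fixed "network": one input feature, two hidden tanh units.
            w1, b1 = np.array([1.3, -0.7]), np.array([0.1, 0.2])
            w2, b2 = np.array([0.5, 0.8]), -1.0
            return float(w2 @ np.tanh(w1 * g + b1) + b2)

        def total_energy(positions):
            # Sum of environment-dependent atomic energies, HDNN-style.
            e = 0.0
            for i, ri in enumerate(positions):
                others = np.delete(positions, i, axis=0)
                g = 1.0 / np.min(np.linalg.norm(others - ri, axis=1))  # toy descriptor
                e += atomic_energy(g)
            return e

        chain = np.array([[1.54 * k, 0.0, 0.0] for k in range(11)])  # toy C11 chain
        print(f"Toy total energy: {total_energy(chain):.3f} (arbitrary units)")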

  6. A fast dose calculation method based on table lookup for IMRT optimization

    International Nuclear Information System (INIS)

    Wu Qiuwen; Djajaputra, David; Lauterbach, Marc; Wu Yan; Mohan, Radhe

    2003-01-01

    This note describes a fast dose calculation method that can be used to speed up the optimization process in intensity-modulated radiotherapy (IMRT). Most iterative optimization algorithms in IMRT require a large number of dose calculations to achieve convergence, and the total time needed for IMRT planning can therefore be substantially reduced by using a faster dose calculation method. The method described in this note relies on an accurate dose calculation engine that is used to compute an approximate dose kernel for each beam in the treatment plan. Once the kernel is computed and saved, subsequent dose calculations can be done rapidly by looking up this kernel. Inaccuracies due to the approximate nature of the kernel can be reduced by performing scheduled kernel updates. This fast dose calculation method is more than two orders of magnitude faster than typical superposition/convolution methods and is therefore suitable for applications in which speed is critical, e.g., an IMRT optimization that requires a simulated annealing optimization algorithm or a practical IMRT beam-angle optimization system. (note)
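
    The speed-up described in this note comes from replacing the expensive engine call inside the optimization loop with a cached linear map. A schematic version, with a random matrix standing in for the precomputed per-beam kernels:

        import numpy as np

        n_voxels, n_beamlets = 5_000, 400
        rng = np.random.default_rng(2)

        # One-off expensive step (stands in for the accurate dose engine): a
        # kernel giving dose per unit beamlet weight. Scheduled re-computation
        # of this matrix corresponds to the "kernel updates" mentioned above.
        kernel = rng.random((n_voxels, n_beamlets)) * 1e-3   # Gy per unit weight

        def dose_from_weights(weights):
            # Fast per-iteration dose calculation: a single lookup/multiply.
            return kernel @ weights

        dose = dose_from_weights(rng.random(n_beamlets))     # inside the optimizer
        print(f"Mean voxel dose: {dose.mean():.4f} Gy")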

  7. Study on the evaluation method of radiation dose rate around spent fuel shipping casks

    International Nuclear Information System (INIS)

    Yamakoshi, Hisao

    1986-01-01

    This study aims at developing a simple calculation method that can evaluate the radiation dose rate around casks with high accuracy in a short time. The method is based on the concept of the radiation shielding characteristics of the cask walls; this concept was introduced to replace the ordinary radiation shielding calculation, which requires a long calculation time and a large computer memory capacity for the matrix calculation. To verify the accuracy and reliability of the new method, it was applied to the analysis of measured dose rate distributions around actual casks. The analysis revealed that the newly proposed method is excellent for predicting the dose rate distribution around casks in terms of both accuracy and calculation time. The short calculation time and high accuracy were attained by dividing the ordinary fine radiation shielding calculation into two steps: calculation of the radiation dose rate on the cask surface through the matrix expression of the characteristic function, and calculation of the surrounding dose rate distribution using a simple analytical expression. The effect of a heterogeneous array of spent fuel in different burnup states on the dose rate distribution around casks was also evaluated with this method. (Kako, I.)

  8. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling these phenomena in present and future nuclear power systems. The WPRS has different expert groups covering a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding core reactivity characteristics (including fuel depletion effects), core power/flux distributions, and core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR), entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues', NEA/NSC/DOC(2013), which documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR with a 30% MOX fuel loading. The standard MOX assemblies used in the Saint-Laurent B1 reactor include three zones with different plutonium enrichments: high Pu content (5.64%) in the central zone, medium Pu content (4.42%) in the intermediate zone, and low Pu content (2.91%) in the peripheral zone.

  9. Benchmarking Academic Anatomic Pathologists

    Directory of Open Access Journals (Sweden)

    Barbara S. Ducatman MD

    2016-10-01

    The most common benchmarks for faculty productivity are derived from the Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From a review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization's methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical "full-time faculty" (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs-reported median work relative

  10. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Randriantsizafy, R D; Ramanandraibe, M J [Madagascar Institut National des Sciences et Techniques Nucleaires, Antananarivo (Madagascar); Raboanary, R [Institut of astro and High-Energy Physics Madagascar, University of Antananarivo, Antananarivo (Madagascar)

    2007-07-01

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for the prescribed dose is made manually. A Monte Carlo Python library written at the Madagascar INSTN is experimentally used to calculate the dose distribution on the tumour and around it. The first validation of the code was done by comparing the library's curves with the Nucletron company's curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT images for individual and more accurate treatment-time calculation for a prescribed dose.
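
    The record gives no implementation details, so as an illustration only, here is a minimal Python sketch of the underlying idea: Monte Carlo scoring of the radial dose around an isotropic Cs-137 point source in water. The attenuation coefficient and the full-local-absorption approximation are assumptions made for the sketch, not the INSTN library's actual physics.

    ```python
    import numpy as np

    # Minimal Monte Carlo sketch: radial dose around an isotropic Cs-137
    # point source in water. Illustrative assumptions, not the real library.
    MU_WATER = 0.0857   # cm^-1, approx. linear attenuation of water at 662 keV
    E_GAMMA = 0.662     # MeV carried by each photon

    def radial_dose(n_photons=200_000, r_max=10.0, n_bins=50, seed=0):
        rng = np.random.default_rng(seed)
        edges = np.linspace(0.0, r_max, n_bins + 1)
        # Distance to first interaction follows an exponential law; as a crude
        # approximation the photon deposits all its energy there (no scatter).
        r = rng.exponential(1.0 / MU_WATER, size=n_photons)
        idx = np.digitize(r, edges) - 1
        idx = idx[(idx >= 0) & (idx < n_bins)]
        energy = np.bincount(idx, minlength=n_bins) * E_GAMMA      # MeV/shell
        shell_mass = 4/3 * np.pi * (edges[1:]**3 - edges[:-1]**3)  # g (water)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, energy / shell_mass    # MeV/g; x1.602e-10 gives Gy

    r, d = radial_dose()
    print(d[1] / d[10])   # dose falls roughly as exp(-mu*r) / r^2
    ```

    A real brachytherapy engine would additionally transport scattered photons and score dose in 3D around the actual source geometry; distributing histories across a grid of PCs, as the authors describe, is embarrassingly parallel.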

  12. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets

  13. Simulation of sound waves using the Lattice Boltzmann Method for fluid flow: Benchmark cases for outdoor sound propagation

    NARCIS (Netherlands)

    Salomons, E.M.; Lohman, W.J.A.; Zhou, H.

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind.

  14. Design study on dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    Yoshida, Yoshitaka; Irie, Takashi; Kohriyama, Tamio; Kudo, Seiichi; Nishimura, Kazuya

    2002-01-01

    If a severe accident occurs in a pressurized water reactor plant, it is necessary to estimate the doses to operators engaged in emergency tasks such as accident management and repair of failed parts. However, it can be difficult to measure the radiation dose rate during the progress of an accident, because radiation monitors are not always installed in the areas where the emergency activities are required. In this study, we analyzed the transport of radioactive materials in the case of a severe accident, investigated a method to obtain the radiation dose rate in the plant from the estimated radioactive sources, built a prototype analysis system based on this design study, and then evaluated its usefulness. As a result, we obtained the following: (1) A new dose evaluation method was established to predict the radiation dose rate at any point in the plant during a severe accident scenario. (2) Evaluation of the total dose, including the access route and the time required for emergency activities, is useful for checking these employee actions against radiation dose limits. (3) A radiation dose rate map is effective for identifying high radiation areas and for choosing a route with a lower radiation dose rate. (author)
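
    Point (2) reduces, in its simplest form, to summing dose rate times residence time over the segments of an access route. A minimal sketch, with hypothetical area names and dose-rate values:

    ```python
    # Total dose along an access route: sum of (dose rate x time) per segment.
    # The dose-rate map and the route below are hypothetical illustrations.
    dose_rate = {"corridor": 0.5, "valve_room": 12.0, "pump_room": 3.0}  # mSv/h
    route = [("corridor", 0.10), ("valve_room", 0.25), ("pump_room", 0.50)]  # (area, hours)

    total = sum(dose_rate[area] * hours for area, hours in route)
    print(f"estimated task dose: {total:.2f} mSv")  # compare against the dose limit
    ```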

  15. Maximum skin dose assessment in interventional cardiology: large area detectors and calculation methods

    International Nuclear Information System (INIS)

    Quail, E.; Petersol, A.

    2002-01-01

    Advances in imaging technology have facilitated the development of increasingly complex radiological procedures for interventional radiology. Such interventional procedures can involve significant patient exposure, although they often represent alternatives to more hazardous surgery or are the sole method of treatment. Interventional radiology is already an established part of mainstream medicine and is likely to expand further with the continuing development and adoption of new procedures. Among all medical exposures, interventional radiology heads the list of radiological practices in terms of effective dose per examination, with a mean value of 20 mSv. Currently, interventional radiology contributes 4% of the annual collective dose, despite accounting for only 0.3% of the total annual examination frequency; given the prospects of this method, a large increase in this contribution can be expected. In IR procedures the potential for deterministic effects on the skin is a risk to be taken into account together with the stochastic long-term risk. Indeed, the International Commission on Radiological Protection (ICRP), in its publication No. 85, affirms that the patient dose of priority concern is the absorbed dose in the area of skin that receives the maximum dose during an interventional procedure. For these reasons, in IR it is important to give practitioners information on the dose received by the patient's skin during the procedure. In this paper, the absorbed dose in the area of skin receiving the maximum dose during an interventional procedure is termed the maximum local skin dose (MSD).

  16. A study on measurement of artificial radiation dose rate using the response matrix method

    International Nuclear Information System (INIS)

    Kidachi, Hiroshi; Ishikawa, Yoichi; Konno, Tatsuya

    2004-01-01

    We examined the accuracy and stability of the estimated artificial dose contribution, which is distinguished from the natural background gamma-ray dose rate using the response matrix method. Irradiation experiments using artificial gamma-ray sources indicated a linear relationship between the observed dose rate and the estimated artificial dose contribution when the irradiated artificial gamma-ray dose rate was higher than about 2 nGy/h. Statistical and time-series analyses of long-term data made it clear that the estimated artificial contribution showed almost constant values under no artificial influence from the nuclear power plants. However, variations in the estimated artificial dose contribution were infrequently observed due to rainfall, detector maintenance operations and calibration errors. Some considerations of the factors behind these variations are given. (author)
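
    The record does not describe the unfolding step, but response-matrix methods generally estimate the incident spectrum from measured pulse-height counts by inverting counts = R·spectrum; the artificial contribution can then be separated from the natural background components. A minimal non-negative least-squares sketch, with a placeholder 3x3 response matrix:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Unfold an incident gamma spectrum from detector pulse-height counts:
    # counts = R @ spectrum, with R the detector response matrix.
    # R here is a placeholder; a real matrix comes from calibration/simulation.
    R = np.array([[0.8, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.2, 0.9]])
    counts = np.array([120.0, 340.0, 80.0])   # measured channel counts (hypothetical)

    spectrum, residual = nnls(R, counts)       # non-negative least squares
    print(spectrum, residual)
    ```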

  17. A new method to assess the gonadal doses in women during radiation treatment

    International Nuclear Information System (INIS)

    Agrawal, M.S.; Pant, G.C.

    1977-01-01

    The relative inaccessibility of the ovaries renders direct measurement of the gonadal doses difficult. A relatively simple method is described to tackle this problem, using the upper margin of the pubic symphysis as a reference point. Measurements of radiation dose were made in a Masonite human phantom using TLDs and a Co-60 teletherapy unit. The accompanying figures document the observations made. The distance between the lower edge of the treatment port and the reference point is denoted by 'd'. The first figure relates the observed ratios of the radiation doses at the ovary and the reference point to 'd' for various port sizes, and the second figure shows the relationship between the area of the port and the dose ratio (ovary:reference point) for various values of 'd'. The advantage of this documentation is that it serves as a 'Ready Reckoner' for assessing the ovarian dose under different treatment situations once the dose at the reference point has been measured.
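
    In code, such a 'Ready Reckoner' is just a two-dimensional interpolation table: ovary dose = measured reference-point dose x ratio(d, port area). A sketch with a hypothetical ratio table (the real values are the phantom TLD measurements reported in the paper):

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical ovary:reference dose ratios, tabulated against the
    # port-edge-to-symphysis distance d and the port area.
    d_cm = np.array([0.0, 5.0, 10.0, 15.0])
    area_cm2 = np.array([100.0, 200.0, 400.0])
    ratio = np.array([[0.30, 0.40, 0.55],
                      [0.10, 0.15, 0.25],
                      [0.04, 0.06, 0.10],
                      [0.02, 0.03, 0.05]])

    lookup = RegularGridInterpolator((d_cm, area_cm2), ratio)
    ref_dose = 2.0                                # Gy measured at reference point
    print(ref_dose * lookup([[7.5, 250.0]])[0])   # estimated ovarian dose (Gy)
    ```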

  18. Benchmarking of Municipal Casework (Benchmarking af kommunernes sagsbehandling)

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, the National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' casework. The purpose of the benchmarking is to develop the design of the practice investigations with a view to better follow-up, and to improve the municipalities' casework. This working paper discusses methods for benchmarking...

  19. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  20. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications…

  1. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  2. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
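
    As an illustration of two of the performance metrics named above, the sketch below computes a centered root-mean-square error against the true homogeneous series and the error in the fitted linear trend; the series are synthetic stand-ins for the benchmark data:

    ```python
    import numpy as np

    def centered_rmse(est, truth):
        """Centered RMSE: RMS error after removing each series' mean."""
        e = (est - est.mean()) - (truth - truth.mean())
        return np.sqrt(np.mean(e**2))

    def trend_error(est, truth):
        """Difference in fitted linear trend (per time step)."""
        t = np.arange(len(est))
        slope = lambda y: np.polyfit(t, y, 1)[0]
        return slope(est) - slope(truth)

    # Hypothetical example: a homogenized series vs. the benchmark truth.
    rng = np.random.default_rng(1)
    truth = 0.01 * np.arange(120) + rng.normal(0, 0.5, 120)
    est = truth + rng.normal(0, 0.2, 120)
    print(centered_rmse(est, truth), trend_error(est, truth))
    ```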

  3. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  4. Dose calculation method with 60-cobalt gamma rays in total body irradiation

    International Nuclear Information System (INIS)

    Scaff, Luiz Alberto Malaguti

    2001-01-01

    Physical factors associated with total body irradiation using 60Co gamma-ray beams were studied in order to develop a calculation method for the dose distribution that could be reproduced in any radiotherapy center with good precision. The method is based on treating total body irradiation as a large and irregular field with heterogeneities. To calculate the dose, or dose rate, for each area of interest (head, thorax, thigh, etc.), the scattered radiation is determined. It was observed that if demagnified fields were considered in calculating the scattered radiation, the resulting values could be projected back to the real field size to obtain the values for dose rate calculations. In parallel, the variation of the dose rate in air was determined for the treatment distance and for points off the central axis, confirming that use of the inverse square law is not valid. An attenuation curve for a broad beam was also determined in order to allow the use of absorbers. In this work all the adapted formulas for dose rate calculations in several areas of the body are described, as well as time/dose template sheets for total body irradiation. In vivo dosimetry proved that the experimental and calculated dose rate values (achieved by the proposed method) showed no significant discrepancies. (author)

  5. SU-G-BRC-08: Evaluation of Dose Mass Histogram as a More Representative Dose Description Method Than Dose Volume Histogram in Lung Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Liu, J; Eldib, A; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States); Lin, M [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States); Li, J [Cyber Medical Inc, Xian, Shaanxi (China); Mora, G [Universidade de Lisboa, Codex, Lisboa (Portugal)

    2016-06-15

    Purpose: The dose-volume histogram (DVH) is widely used for plan evaluation in radiation treatment. The concept of the dose-mass histogram (DMH) is expected to provide a more representative description as it accounts for heterogeneity in tissue density. This study is intended to assess the difference between DVH and DMH for evaluating treatment planning quality. Methods: 12 lung cancer treatment plans were exported from the treatment planning system. DVHs for the planning target volume (PTV), the normal lung and other structures of interest were calculated. DMHs were calculated in a similar way as DVHs except that the voxel density converted from the CT number was used in tallying the dose histogram bins. The equivalent uniform dose (EUD) was calculated based on voxel volume and mass, respectively. The normal tissue complication probability (NTCP) in relation to the EUD was calculated for the normal lung to provide a quantitative comparison of DVHs and DMHs for evaluating the radiobiological effect. Results: Large differences were observed between DVHs and DMHs for lungs and PTVs. For PTVs with dense tumor cores, DMHs are higher than DVHs due to larger mass weighting in the high-dose conformal core regions. For the normal lungs, DMHs can be either higher or lower than DVHs depending on the target location within the lung. When the target is close to the lower lung, DMHs show higher values than DVHs because the lower lung has higher density than the central portion or the upper lung. DMHs are lower than DVHs for targets in the upper lung. The calculated NTCPs showed a large range of difference between DVHs and DMHs. Conclusion: The heterogeneity of the lung can be properly accounted for by using DMH for evaluating target coverage and normal lung pneumonitis. Further studies are warranted to quantify the benefits of DMH over DVH for plan quality evaluation.
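
    In code, the only difference between the two histograms is the per-voxel weight: voxel volume for the DVH, density times volume for the DMH; the same weights enter the generalized EUD. A minimal sketch with synthetic voxel data (the exponent a and the dose/density ranges are illustrative):

    ```python
    import numpy as np

    def cumulative_histogram(dose, weights, bins=100):
        """Cumulative dose histogram: fraction of weight receiving >= dose."""
        edges = np.linspace(0.0, dose.max(), bins + 1)
        frac = [weights[dose >= d].sum() / weights.sum() for d in edges]
        return edges, np.array(frac)

    def eud(dose, weights, a):
        """Generalized EUD = (sum_i w_i d_i^a / sum_i w_i)^(1/a)."""
        w = weights / weights.sum()
        return (np.sum(w * dose**a))**(1.0 / a)

    # Hypothetical voxel data for one structure (dose in Gy, density in g/cm^3).
    rng = np.random.default_rng(0)
    dose = rng.uniform(0.0, 60.0, 10_000)
    density = rng.uniform(0.2, 1.1, 10_000)   # lung-like density spread
    volume = np.ones_like(dose)               # uniform voxel volume

    _, dvh = cumulative_histogram(dose, volume)             # volume-weighted
    _, dmh = cumulative_histogram(dose, density * volume)   # mass-weighted
    print(eud(dose, volume, a=1.0), eud(dose, density * volume, a=1.0))
    ```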

  6. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  7. Noninvasive Non-Dose Method for Risk Stratification of Breast Diseases

    Directory of Open Access Journals (Sweden)

    I. A. Apollonova

    2014-01-01

    Full Text Available The article concerns a relevant issue: the development of a noninvasive method for screening diagnostics and risk stratification of breast diseases. The developed method and its embodiment use both the analysis of onco-epidemiological tests and iridoglyphical research. Widely used onco-epidemiological tests only reflect the patient's subjective perception of her own life history and sickness. Therefore, to confirm the revealed factors, modern objective and safe methods are necessary. Iridoglyphical research may be considered one such method, since it allows changes in zones of the iris to be revealed in real time. As these zones are functionally linked with internal organs and systems, in this case the mammary glands, changes in the iris zones may be used to assess risk groups for mammary gland disorders. The article presents results of research conducted using a prototype of the hardware-software complex for screening diagnostics and risk stratification of mammary gland disorders. The research was conducted using verified materials provided by the Biomedical Engineering Faculty and the Scientific Biometry Research and Development Centre of Bauman Moscow State Technical University, the City of Moscow's GUZ Clinical and Diagnostic Centre No. 4 of the Western Administrative District, and the First Mammology (Breast Care) Centre of the Russian Federation's Ministry of Health and Social Development. The information obtained from the onco-epidemiological tests and iridoglyphical research was used to develop a procedure for quantitative diagnostics aimed at assessing mammary gland cancer risk groups. The procedure is based on Bayes conditional probability. The task of quantitative diagnostics may be formally divided into the differential assessment of three states. The first, D1, is the norm, which corresponds to the population group with a lack of risk factors or changes of the mammary glands. The second, D2, is the population group…
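
    The record states only that the procedure is based on Bayes conditional probability over three states. Under a naive-Bayes reading with conditionally independent findings, a sketch looks like the following; all priors and likelihoods are hypothetical placeholders, not the paper's calibrated values:

    ```python
    import numpy as np

    # Naive-Bayes sketch for three states D1 (norm), D2, D3, combining
    # conditionally independent findings. All probabilities are hypothetical.
    prior = np.array([0.90, 0.08, 0.02])                 # P(D_k)
    # P(finding present | D_k) for two example findings:
    likelihood = {"risk_factor": np.array([0.10, 0.40, 0.70]),
                  "iris_sign":   np.array([0.05, 0.30, 0.60])}

    def posterior(findings):
        p = prior.copy()
        for name, present in findings.items():
            lk = likelihood[name]
            p *= lk if present else (1.0 - lk)
        return p / p.sum()                               # normalize over states

    print(posterior({"risk_factor": True, "iris_sign": True}))
    ```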

  8. Benchmarking of refinery emissions performance: Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  9. Benchmarking Quantum Mechanics/Molecular Mechanics (QM/MM) Methods on the Thymidylate Synthase-Catalyzed Hydride Transfer.

    Science.gov (United States)

    Świderek, Katarzyna; Arafet, Kemel; Kohen, Amnon; Moliner, Vicent

    2017-03-14

    Given the ubiquity of hydride-transfer reactions in enzyme-catalyzed processes, identifying the appropriate computational method for evaluating such biological reactions is crucial to perform theoretical studies of these processes. In this paper, the hydride-transfer step catalyzed by thymidylate synthase (TSase) is studied by examining hybrid quantum mechanics/molecular mechanics (QM/MM) potentials via multiple semiempirical methods and the M06-2X hybrid density functional. Calculations of protium and tritium transfer in these reactions across a range of temperatures allowed calculation of the temperature dependence of kinetic isotope effects (KIE). Dynamics and quantum-tunneling effects are revealed to have little effect on the reaction rate, but are significant in determining the KIEs and their temperature dependence. A good agreement with experiments is found, especially when computed for RM1/MM simulations. The small temperature dependence of quantum tunneling corrections and the quasiclassical contribution term cancel each other, while the recrossing transmission coefficient seems to be temperature-independent over the interval of 5-40 °C.

  10. A photocurrent compensation method of bipolar transistors under high dose rate radiation and its experimental research

    International Nuclear Information System (INIS)

    Yin Xuesong; Liu Zhongli; Li Chunji; Yu Fang

    2005-01-01

    An experiment using discrete bipolar transistors has been performed to verify the effect of the photocurrent compensation method. The theory of dose rate effects in bipolar transistors and the photocurrent compensation method are introduced. The comparison between the responses of hardened and unhardened circuits under high dose rate radiation is discussed. The experimental results provide guidance for the hardening of bipolar integrated circuits against transient radiation. (authors)

  11. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  12. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy-efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed in a simple benchmark table (percentile table) of energy use, which is normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and the use of the benchmarking method
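
    A minimal sketch of this normalization idea: regress EUI on an explanatory factor, remove the factor's effect, and read a building's percentile from the normalized distribution. The data and the single factor (operating hours) are synthetic placeholders:

    ```python
    import numpy as np

    # Synthetic sample: weekly operating hours drive energy-use intensity (EUI).
    rng = np.random.default_rng(0)
    n = 200
    hours = rng.uniform(60, 168, n)
    eui = 500 + 2.5 * hours + rng.normal(0, 80, n)       # kWh/m^2/yr

    # Fit EUI = b0 + b1 * hours by ordinary least squares.
    X = np.column_stack([np.ones(n), hours])
    beta, *_ = np.linalg.lstsq(X, eui, rcond=None)

    # Normalize: remove each building's deviation from mean operating hours.
    eui_norm = eui - beta[1] * (hours - hours.mean())

    def percentile_of(value, sample):
        return 100.0 * np.mean(sample <= value)

    # Benchmark a new building against the normalized distribution.
    new_eui, new_hours = 900.0, 100.0
    new_norm = new_eui - beta[1] * (new_hours - hours.mean())
    print(f"benchmark percentile: {percentile_of(new_norm, eui_norm):.0f}")
    ```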

  13. Application of the personnel photographic monitoring method to determine equivalent radiation dose beyond proton accelerator shielding

    International Nuclear Information System (INIS)

    Gel'fand, E.K.; Komochkov, M.M.; Man'ko, B.V.; Salatskaya, M.I.; Sychev, B.S.

    1980-01-01

    Calculations of the mechanisms forming the radiation dose beyond proton accelerator shielding are carried out. Numerical data on the response of the photographic monitoring dosimeter in the radiation fields investigated are obtained. It is shown how the total equivalent dose of the radiation fields beyond proton accelerator shielding can be determined by the photographic monitoring method, by introducing into the nuclear-emulsion reading procedure a division of particle tracks into black and grey ones. A comparison of experimental and calculated data has shown the applicability of the calculation method used for modelling the dose characteristics of radiation beyond proton accelerator shielding. (in Russian)

  14. Method for simulating dose reduction in digital mammography using the Anscombe transformation

    International Nuclear Information System (INIS)

    Borges, Lucas R.; Oliveira, Helder C. R. de; Nunes, Polyana F.; Vieira, Marcelo A. C.; Bakic, Predrag R.; Maidment, Andrew D. A.

    2016-01-01

    Purpose: This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. Methods: The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower-radiation dose. Results: The performance of the proposed algorithm was validated using real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image and with uniform images. The authors simulated lower-dose images and compared these with the real images. The authors evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated images and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. Conclusions: A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation.
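
    A simplified version of this idea can be sketched as follows: scale the image to the lower dose and add the extra quantum (Poisson-like) noise in the Anscombe domain, where such noise is approximately Gaussian with unit variance. The detector offset/gain maps and measured flat-field noise masks of the actual method are omitted; for a pure-Poisson image, the extra Anscombe-domain variance needed is 1 minus the dose fraction:

    ```python
    import numpy as np

    def anscombe(x):
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_anscombe(y):
        return (y / 2.0)**2 - 3.0 / 8.0

    def simulate_low_dose(image, dose_fraction, rng=None):
        """Simplified dose-reduction sketch for a Poisson-dominated image."""
        rng = rng or np.random.default_rng(0)
        scaled = image * dose_fraction            # fewer quanta at lower dose
        y = anscombe(scaled)
        # Add the missing variance so the result matches Poisson statistics
        # at the lower dose (Anscombe-domain variance of Poisson data ~ 1).
        y += rng.normal(0.0, np.sqrt(1.0 - dose_fraction), image.shape)
        return inverse_anscombe(y)

    img = np.random.default_rng(1).poisson(2000.0, (64, 64)).astype(float)
    low = simulate_low_dose(img, dose_fraction=0.5)
    print(img.var(), low.var())   # low-dose variance ~ 0.5 * 2000, as for real Poisson data
    ```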

  16. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the future.

  18. Real-time dose compensation methods for scanned ion beam therapy of moving tumors

    International Nuclear Information System (INIS)

    Luechtenborg, Robert

    2012-01-01

    Scanned ion beam therapy provides highly tumor-conformal treatments. So far, only tumors showing no considerable motion during therapy have been treated, as tumor motion and dynamic beam delivery interfere, causing dose deteriorations. One proposed technique to mitigate these deteriorations is beam tracking (BT), which adapts the beam position to the moving tumor. Despite the application of BT, dose deviations can occur in the case of non-translational motion. In this work, real-time dose compensation combined with beam tracking (RDBT) has been implemented in the control system to compensate these dose changes by adapting the nominal particle numbers during irradiation. Compared to BT, significantly reduced dose deviations were measured using RDBT. Treatment planning studies for lung cancer patients including the increased biological effectiveness of ions revealed a significantly reduced over-dose level (3/5 patients) as well as significantly improved dose homogeneity (4/5 patients) for RDBT. Based on these findings, real-time dose compensated re-scanning (RDRS) has been proposed, which potentially supersedes the technically complex fast energy adaptation necessary for BT and RDBT. Significantly improved conformity compared to re-scanning, i.e., averaging of dose deviations by repeated irradiation, was measured in film irradiations. Simulations comparing RDRS to BT revealed reduced under- and overdoses for the former method.
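
    The compensation principle (adapting nominal particle numbers during irradiation) can be caricatured in a few lines: redistribute the signed delivery error of the spots irradiated so far over the spots still to come. This is a gross simplification, since the real system compensates full 3D dose contributions under motion; all numbers are illustrative:

    ```python
    # Schematic real-time compensation sketch for a scanned-beam spot sequence.
    planned = [1.0e7, 1.2e7, 0.8e7, 1.1e7]      # nominal particles per spot
    measured = [1.05e7, 1.10e7]                  # actually delivered so far

    # Signed error accumulated over the already-irradiated spots.
    deficit = sum(p - m for p, m in zip(planned, measured))

    # Spread the deficit proportionally over the remaining spots.
    remaining = planned[len(measured):]
    scale = 1.0 + deficit / sum(remaining)
    compensated = [n * scale for n in remaining]

    print(f"deficit: {deficit:.3g} particles; adapted spots: {compensated}")
    ```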

  19. Digital radiography of scoliosis with a scanning method: radiation dose optimization

    Energy Technology Data Exchange (ETDEWEB)

    Geijer, Haakan; Andersson, Torbjoern [Department of Radiology, Oerebro University Hospital, 701 85 Oerebro (Sweden); Verdonck, Bert [Philips Medical Systems, P.O. Box 10,000, 5680 Best (Netherlands); Beckman, Karl-Wilhelm; Persliden, Jan [Department of Medical Physics, Oerebro University Hospital, 701 85 Oerebro (Sweden)

    2003-03-01

    The aim of this study was optimization of the radiation dose-image quality relationship for a digital scanning method of scoliosis radiography. The examination is performed as a digital multi-image translation scan that is reconstructed to a single image in a workstation. Entrance dose was recorded with thermoluminescent dosimeters placed dorsally on an Alderson phantom. At the same time, kerma area product (KAP) values were recorded. A Monte Carlo calculation of effective dose was also made. Image quality was evaluated with a contrast-detail phantom and Visual Grading. The radiation dose was reduced by lowering the image intensifier entrance dose request, adjusting pulse frequency and scan speed, and by raising tube voltage. The calculated effective dose was reduced from 0.15 to 0.05 mSv with reduction of KAP from 1.07 to 0.25 Gy cm{sup 2} and entrance dose from 0.90 to 0.21 mGy. The image quality was reduced with the Image Quality Figure going from 52 to 62 and a corresponding reduction in image quality as assessed with Visual Grading. The optimization resulted in a dose reduction to 31% of the original effective dose with an acceptable reduction in image quality considering the intended use of the images for angle measurements. (orig.)

  20. A simple method to back-project isocenter dose of radiotherapy treatments using EPID transit dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Silveira, T.B.; Cerbaro, B.Q.; Rosa, L.A.R. da, E-mail: thiago.fisimed@gmail.com, E-mail: tbsilveira@inca.gov.br [Instituto de Radioproteção e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro - RJ (Brazil)

    2017-07-01

    The aim of this work was to implement a simple algorithm to evaluate the isocenter dose in a phantom using the back-projected transmitted dose acquired with an Electronic Portal Imaging Device (EPID) available on a Varian Trilogy accelerator with nominal 6 and 10 MV photon beams. The algorithm was developed in MATLAB to calibrate the EPID-measured dose in absolute dose, using a deconvolution process, and to incorporate all scattering and attenuation contributions due to photon interactions with the phantom. The modeling process was simplified by using empirical curve fits to describe the contribution of scattering and attenuation effects. The implemented algorithm and method were validated employing 19 patient treatment plans with 104 clinical irradiation fields projected onto the phantom. Results for EPID absolute dose calibration by deconvolution showed percent deviations lower than 1%. The final method validation presented average percent deviations between isocenter doses calculated by back-projection and isocenter doses determined with an ionization chamber of 1.86% (SD of 1.00%) and -0.94% (SD of 0.61%) for 6 and 10 MV, respectively. Normalized field-by-field analysis showed deviations smaller than 2% for 89% of all data for 6 MV beams and 94% for 10 MV beams. It was concluded that the proposed algorithm possesses sufficient accuracy to be used for in vivo dosimetry, being sensitive enough to detect dose delivery errors larger than 3-4% for conformal and intensity-modulated radiation therapy techniques. (author)
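
    Structurally, such a back-projection multiplies the EPID-plane dose by an inverse-square factor and empirical attenuation/scatter corrections. A sketch with placeholder correction values (the paper's corrections are empirical curve fits, not these constants):

    ```python
    import math

    # Back-projection sketch: estimate the isocenter dose in the phantom from
    # the transmitted dose measured at the EPID plane. mu, t_cm and the
    # scatter correction are placeholder values, not the paper's fits.
    def isocenter_dose(d_epid, sdd=150.0, sid=100.0, mu=0.05, t_cm=20.0,
                       scatter_corr=1.10):
        inv_sq = (sdd / sid) ** 2                 # EPID plane -> isocenter
        half_attn = math.exp(mu * t_cm / 2.0)     # undo downstream half of phantom
        return d_epid * inv_sq * half_attn * scatter_corr

    print(f"{isocenter_dose(0.85):.3f} Gy")       # hypothetical EPID dose of 0.85 Gy
    ```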

  1. Benchmarking sample preparation/digestion protocols reveals tube-gel being a fast and repeatable method for quantitative proteomics.

    Science.gov (United States)

    Muller, Leslie; Fornecker, Luc; Van Dorsselaer, Alain; Cianférani, Sarah; Carapito, Christine

    2016-12-01

    Sample preparation, typically by in-solution or in-gel approaches, has a strong influence on the accuracy and robustness of quantitative proteomics workflows. The major benefit of in-gel procedures is their compatibility with detergents (such as SDS) for protein solubilization. However, SDS-PAGE is a time-consuming approach. Tube-gel (TG) preparation circumvents this drawback as it involves directly trapping the sample in a polyacrylamide gel matrix without electrophoresis. We report here the first global label-free quantitative comparison between TG, stacking gel (SG), and basic liquid digestion (LD). A series of UPS1 standard mixtures (at 0.5, 1, 2.5, 5, 10, and 25 fmol) were spiked in a complex yeast lysate background. TG preparation allowed more yeast proteins to be identified than did the SG and LD approaches, with mean numbers of 1979, 1788, and 1323 proteins identified, respectively. Furthermore, the TG method proved equivalent to SG and superior to LD in terms of the repeatability of the subsequent experiments, with mean CV for yeast protein label-free quantifications of 7, 9, and 10%. Finally, known variant UPS1 proteins were successfully detected in the TG-prepared sample within a complex background with high sensitivity. All the data from this study are accessible on ProteomeXchange (PXD003841). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
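
    For readers who want to reproduce the qualitative behavior, here is a minimal D2Q9 lattice Boltzmann (BGK) sketch of a Gaussian density pulse radiating as a sound wave in a periodic 2D domain; the grid size, relaxation time and amplitude are illustrative choices rather than the paper's settings (the lattice sound speed is 1/√3 in lattice units):

    ```python
    import numpy as np

    nx, ny, steps = 200, 200, 300
    tau = 0.6                                 # relaxation time (sets viscosity)
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)  # D2Q9 lattice weights

    def equilibrium(rho, ux, uy):
        cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
        usq = ux**2 + uy**2
        return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    # Small Gaussian density (pressure) pulse at the domain center.
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    rho = 1.0 + 0.01 * np.exp(-((x - nx/2)**2 + (y - ny/2)**2) / 50.0)
    f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

    for _ in range(steps):
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += (equilibrium(rho, ux, uy) - f) / tau        # collision (BGK)
        for i, (cx, cy) in enumerate(c):                 # streaming (periodic)
            f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)

    rho = f.sum(axis=0)
    print("density perturbation range:", rho.min() - 1, rho.max() - 1)
    ```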

  3. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Full Text Available Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections

  4. Dose calculation methods in photon beam therapy using energy deposition kernels

    International Nuclear Information System (INIS)

    Ahnesjoe, A.

    1991-01-01

    The problem of calculating accurate dose distributions in treatment planning of megavoltage photon radiation therapy has been studied. New dose calculation algorithms using energy deposition kernels have been developed. The kernels describe the transfer of energy by secondary particles from a primary photon interaction site to its surroundings. Monte Carlo simulations of particle transport have been used for derivation of kernels for primary photon energies from 0.1 MeV to 50 MeV. The trade-off between accuracy and calculational speed has been addressed by the development of two algorithms: one point-oriented, with low computational overhead for interactive use, and one for fast and accurate calculation of dose distributions in a 3-dimensional lattice. The latter algorithm models secondary particle transport in heterogeneous tissue by scaling energy deposition kernels with the electron density of the tissue. The accuracy of the methods has been tested using full Monte Carlo simulations for different geometries, and found to be superior to conventional algorithms based on scaling of broad beam dose distributions. Methods have also been developed for characterization of clinical photon beams in entities appropriate for kernel based calculation models. By approximating the spectrum as laterally invariant, an effective spectrum and a dose distribution for contaminating charged particles are derived from depth dose distributions measured in water, using analytical constraints. The spectrum is used to calculate kernels by superposition of monoenergetic kernels. The lateral energy fluence distribution is determined by deconvolving measured lateral dose distributions with a corresponding pencil beam kernel. Dose distributions for contaminating photons are described using two different methods, one for estimation of the dose outside of the collimated beam, and the other for calibration of output factors derived from kernel based dose calculations. (au)
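
    The core of a kernel-based algorithm is a convolution of the primary energy released (TERMA) with an energy deposition kernel. A deliberately crude 2D sketch in homogeneous water with a toy kernel follows; the paper's kernels are Monte Carlo derived and density-scaled, which this sketch omits:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Superposition sketch: dose = TERMA (x) energy deposition kernel, 2D,
    # homogeneous water. mu, voxel size and the kernel shape are assumptions.
    nx, nz = 64, 64
    mu = 0.05                                  # cm^-1, primary attenuation
    dx = 0.5                                   # cm voxel size

    z = np.arange(nz) * dx
    terma = np.zeros((nx, nz))
    terma[28:36, :] = np.exp(-mu * z)          # 4 cm wide beam entering at z=0

    # Toy isotropic point kernel ~ exp(-b r)/r^2 (forward-peaking omitted).
    xx, zz = np.meshgrid(np.arange(-8, 9) * dx, np.arange(-8, 9) * dx,
                         indexing="ij")
    r = np.sqrt(xx**2 + zz**2) + 1e-6
    kernel = np.exp(-3.0 * r) / r**2
    kernel /= kernel.sum()

    dose = fftconvolve(terma, kernel, mode="same")
    print(dose.max(), dose[32, :5])            # depth dose along the beam axis
    ```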

  5. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service.

  6. Determination of electron clinical spectra from percentage depth dose (PDD) curves by classical simulated annealing method

    International Nuclear Information System (INIS)

    Visbal, Jorge H. Wilches; Costa, Alessandro M.

    2016-01-01

    The percentage depth dose (PDD) of electron beams represents an important item of data in radiation therapy, since it describes their dosimetric properties. Accurate transport theory, and the Monte Carlo method, have shown obvious differences between the dose distribution in a water phantom of the electron beams of a clinical accelerator and the dose distribution in water of monoenergetic electrons at the accelerator's nominal energy. In radiotherapy, the electron spectrum should be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on the way the radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. There are three principal approaches to obtaining electron energy spectra from the central-axis PDD: the Monte Carlo method, direct measurement, and inverse reconstruction. In this work the simulated annealing method is presented as a practical, reliable and simple approach to inverse reconstruction, being an optimal alternative to the other options. (author)
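
    A minimal sketch of the inverse reconstruction: find non-negative, normalized weights of monoenergetic depth-dose curves so that their superposition matches the measured PDD, with simulated annealing as the optimizer. The monoenergetic curves below are crude analytic stand-ins; in practice they come from Monte Carlo:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    z = np.linspace(0, 6, 60)                       # depth (cm)
    energies = np.array([6.0, 9.0, 12.0])           # assumed MeV components

    def mono_pdd(E):                                # toy depth-dose shape
        R = E / 2.0                                 # ~practical range (cm)
        return np.clip(1.0 - ((z - 0.3 * R) / R)**2, 0.0, None)

    basis = np.array([mono_pdd(E) for E in energies])
    true_w = np.array([0.2, 0.5, 0.3])
    measured = true_w @ basis + rng.normal(0, 0.005, z.size)  # synthetic PDD

    def cost(w):
        return np.sum((w @ basis - measured)**2)

    w, T = np.full(3, 1/3), 1.0
    for step in range(20_000):
        cand = np.abs(w + rng.normal(0, 0.02, 3))
        cand /= cand.sum()                          # keep weights normalized
        dE = cost(cand) - cost(w)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            w = cand                                # Metropolis acceptance
        T *= 0.9995                                 # geometric cooling schedule

    print(w, true_w)                                # recovered vs. true weights
    ```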

  7. Variability in CT lung-nodule volumetry: Effects of dose reduction and reconstruction methods.

    Science.gov (United States)

    Young, Stefano; Kim, Hyun J Grace; Ko, Moe Moe; Ko, War War; Flores, Carlos; McNitt-Gray, Michael F

    2015-05-01

    Measuring the size of nodules on chest CT is important for lung cancer staging and measuring therapy response. 3D volumetry has been proposed as a more robust alternative to 1D and 2D sizing methods. There have also been substantial advances in methods to reduce radiation dose in CT. The purpose of this work was to investigate the effect of dose reduction and reconstruction methods on variability in 3D lung-nodule volumetry. Reduced-dose CT scans were simulated by applying a noise-addition tool to the raw (sinogram) data from clinically indicated patient scans acquired on a multidetector-row CT scanner (Definition Flash, Siemens Healthcare). Scans were simulated at 25%, 10%, and 3% of the dose of their clinical protocol (CTDIvol of 20.9 mGy), corresponding to CTDIvol values of 5.2, 2.1, and 0.6 mGy. Simulated reduced-dose data were reconstructed with both conventional filtered backprojection (B45 kernel) and iterative reconstruction methods (SAFIRE: I44 strength 3 and I50 strength 3). Three lab technologist readers contoured "measurable" nodules in 33 patients under each of the different acquisition/reconstruction conditions in a blinded study design. Of the 33 measurable nodules, 17 were used to estimate repeatability with their clinical reference protocol, as well as interdose and inter-reconstruction-method reproducibilities. The authors compared the resulting distributions of proportional differences across dose and reconstruction methods by analyzing their means, standard deviations (SDs), and t-test and F-test results. The clinical-dose repeatability experiment yielded a mean proportional difference of 1.1% and SD of 5.5%. The interdose reproducibility experiments gave mean differences ranging from -5.6% to -1.7% and SDs ranging from 6.3% to 9.9%. The inter-reconstruction-method reproducibility experiments gave mean differences of 2.0% (I44 strength 3) and -0.3% (I50 strength 3), and SDs were identical at 7.3%. For the subset of repeatability cases, inter-reconstruction-method

  8. SU-F-J-86: Method to Include Tissue Dose Response Effect in Deformable Image Registration

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, J; Liang, J; Chen, S; Qin, A; Yan, D [Beaumont Health System, Royal Oak, MI (United States)

    2016-06-15

    Purpose: Organ changes shape and size during radiation treatment due to both mechanical stress and radiation dose response. However, the dose response induced deformation has not been considered in conventional deformable image registration (DIR). A novel DIR approach is proposed to include both tissue elasticity and radiation dose induced organ deformation. Methods: Assuming that organ sub-volume shrinkage was proportional to the radiation dose induced cell killing/absorption, the dose induced organ volume change was simulated applying virtual temperature on each sub-volume. Hence, both stress and heterogeneity temperature induced organ deformation. Thermal stress finite element method with organ surface boundary condition was used to solve deformation. Initial boundary correspondence on organ surface was created from conventional DIR. Boundary condition was updated by an iterative optimization scheme to minimize elastic deformation energy. The registration was validated on a numerical phantom. Treatment dose was constructed applying both the conventional DIR and the proposed method using daily CBCT image obtained from HN treatment. Results: Phantom study showed 2.7% maximal discrepancy with respect to the actual displacement. Compared with conventional DIR, subvolume displacement difference in a right parotid had the mean±SD (Min, Max) to be 1.1±0.9(−0.4∼4.8), −0.1±0.9(−2.9∼2.4) and −0.1±0.9(−3.4∼1.9)mm in RL/PA/SI directions respectively. Mean parotid dose and V30 constructed including the dose response induced shrinkage were 6.3% and 12.0% higher than those from the conventional DIR. Conclusion: Heterogeneous dose distribution in normal organ causes non-uniform sub-volume shrinkage. Sub-volume in high dose region has a larger shrinkage than the one in low dose region, therefore causing more sub-volumes to move into the high dose area during the treatment course. This leads to an unfavorable dose-volume relationship for the normal organ

  9. Simplified calculation method for radiation dose under normal condition of transport

    International Nuclear Information System (INIS)

    Watabe, N.; Ozaki, S.; Sato, K.; Sugahara, A.

    1993-01-01

    In order to estimate the radiation dose during transportation of radioactive materials, the following computer codes are available: RADTRAN, INTERTRAN, J-TRAN. Because these codes include functions for estimating doses not only under normal conditions but also in the case of accidents, when nuclides may leak and spread into the environment by atmospheric diffusion, the user needs special knowledge and experience. In this presentation we describe how, with a view to providing a method by which a person in charge of transportation can calculate doses under normal conditions, the main parameters on which the dose depends were extracted and the dose for a unit of transportation was estimated. (J.P.N.)

  10. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    Science.gov (United States)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. Common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not sufficiently characterize noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dose level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner and L-moments of noise patches were calculated for the comparison.
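    For readers who want the estimator itself, the first four sample L-moments can be computed from order statistics as follows (Hosking's unbiased estimators); the synthetic noise patch below stands in for the HU values of a real image region.

        import numpy as np

        def l_moments(x):
            """First four sample L-moments of a 1-D sample."""
            x = np.sort(np.asarray(x, float))
            n = x.size
            j = np.arange(1, n + 1)
            b0 = x.mean()
            b1 = np.sum((j - 1) / (n - 1) * x) / n
            b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
            b3 = np.sum((j - 1) * (j - 2) * (j - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
            l1 = b0                                   # location
            l2 = 2 * b1 - b0                          # scale
            l3 = 6 * b2 - 6 * b1 + b0
            l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
            return l1, l2, l3 / l2, l4 / l2           # plus L-skewness, L-kurtosis

        noise_patch = np.random.default_rng(1).standard_normal(10_000)  # stand-in data
        print(l_moments(noise_patch))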

  11. From benchmarking HITS-CLIP peak detection programs to a new method for identification of miRNA-binding sites from Ago2-CLIP data.

    Science.gov (United States)

    Bottini, Silvia; Hamouda-Tekaya, Nedra; Tanasa, Bogdan; Zaragosi, Laure-Emmanuelle; Grandjean, Valerie; Repetto, Emanuela; Trabucchi, Michele

    2017-05-19

    Experimental evidence indicates that about 60% of miRNA-binding activity does not follow the canonical rule of seed matching between the miRNA and target mRNAs, but rather reflects non-canonical miRNA targeting activity outside the seed or with seed-like motifs. Here, we propose a new unbiased method to identify canonical and non-canonical miRNA-binding sites from peaks identified by Ago2 Cross-Linked ImmunoPrecipitation associated with high-throughput sequencing (CLIP-seq). Since the quality of peaks is of pivotal importance for the final output of the proposed method, we provide a comprehensive benchmarking of four peak detection programs, namely CIMS, PIPE-CLIP, Piranha and Pyicoclip, on four publicly available Ago2-HITS-CLIP datasets and one unpublished in-house Ago2 dataset in stem cells. We measured the sensitivity, the specificity and the positional accuracy of miRNA-binding-site identification, and the agreement with TargetScan. Secondly, we developed a new pipeline, called miRBShunter, to identify canonical and non-canonical miRNA-binding sites based on de novo motif identification from Ago2 peaks and prediction of miRNA::RNA heteroduplexes. miRBShunter was tested and experimentally validated on the in-house Ago2 dataset and on an Ago2-PAR-CLIP dataset in human stem cells. Overall, we provide guidelines for choosing a suitable peak detection program and a new method for miRNA-target identification. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  13. SU-E-J-96: Multi-Axis Dose Accumulation of Noninvasive Image-Guided Breast Brachytherapy Through Biomechanical Modeling of Tissue Deformation Using the Finite Element Method

    Energy Technology Data Exchange (ETDEWEB)

    Rivard, MJ [Tufts University School of Medicine, Boston, MA (United States); Ghadyani, HR [SUNY Farmingdale State College, Farmingdale, NY (United States); Bastien, AD; Lutz, NN [University of Massachusetts Lowell, Lowell, MA (United States); Hepel, JT [Rhode Island Hospital, Providence, RI (United States)

    2015-06-15

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR Ir-192 brachytherapy treatments with the breast compressed, and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Given that commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Methods: The model assumed the breast was under planar stress with values of 30 kPa for Young’s modulus and 0.3 for Poisson’s ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distance. Applicator size minimally affected target coverage until the applicator was smaller than the compressed target size. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results and clinical measurements within 2% over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over
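    The planar-stress assumption with the stated constants corresponds to the textbook plane-stress constitutive relation (this is the standard form, not an equation quoted from the paper):

        \[
        \begin{bmatrix}\sigma_{xx}\\ \sigma_{yy}\\ \tau_{xy}\end{bmatrix}
        = \frac{E}{1-\nu^{2}}
        \begin{bmatrix}1 & \nu & 0\\ \nu & 1 & 0\\ 0 & 0 & \tfrac{1-\nu}{2}\end{bmatrix}
        \begin{bmatrix}\varepsilon_{xx}\\ \varepsilon_{yy}\\ \gamma_{xy}\end{bmatrix},
        \qquad E = 30\ \text{kPa},\ \nu = 0.3 .
        \]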

  14. SU-E-J-96: Multi-Axis Dose Accumulation of Noninvasive Image-Guided Breast Brachytherapy Through Biomechanical Modeling of Tissue Deformation Using the Finite Element Method

    International Nuclear Information System (INIS)

    Rivard, MJ; Ghadyani, HR; Bastien, AD; Lutz, NN; Hepel, JT

    2015-01-01

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR Ir-192 brachytherapy treatments with the breast compressed, and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Given that commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Methods: The model assumed the breast was under planar stress with values of 30 kPa for Young’s modulus and 0.3 for Poisson’s ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distance. Applicator size minimally affected target coverage until the applicator was smaller than the compressed target size. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results and clinical measurements within 2% over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over

  15. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    Science.gov (United States)

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose to patients can be reduced by many methods, and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare the radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography. The study used an experimental design with a quantitative approach. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. Radiation doses were measured as dose-area product and analyzed with a paired t-test. Image quality was evaluated by visual grading analysis: four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. The prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with patient-controlled and conventional compression and was judged to be better than in the prone position.

  16. Critical dose threshold for TL dose response non-linearity: Dependence on the method of analysis: It’s not only the data

    International Nuclear Information System (INIS)

    Datz, H.; Horowitz, Y.S.; Oster, L.; Margaliot, M.

    2011-01-01

    It is demonstrated that the method of data analysis, i.e., the method of the phenomenological/theoretical interpretation of dose response data, can greatly influence the estimation of the onset of deviation from dose response linearity of the high temperature thermoluminescence in LiF:Mg,Ti (TLD-100).

  17. A method to acquire CT organ dose map using OSL dosimeters and ATOM anthropomorphic phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Da; Li, Xinhua; Liu, Bob [Division of Diagnostic Imaging Physics and Webster Center for Advanced Research and Education in Radiation, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Gao, Yiming; Xu, X. George [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)

    2013-08-15

    Purpose: To present the design and procedure of an experimental method for acquiring densely sampled organ dose map for CT applications, based on optically stimulated luminescence (OSL) dosimeters “nanoDots” and standard ATOM anthropomorphic phantoms; and to provide the results of applying the method—a dose data set with good statistics for the comparison with Monte Carlo simulation result in the future. Methods: A standard ATOM phantom has densely located holes (in 3 × 3 cm or 1.5 × 1.5 cm grids), which are too small (5 mm in diameter) to host many types of dosimeters, including the nanoDots. The authors modified the conventional way in which nanoDots are used, by removing the OSL disks from the holders before inserting them inside a standard ATOM phantom for dose measurements. The authors solved three technical difficulties introduced by this modification: (1) energy dependent dose calibration for raw OSL readings; (2) influence of the brief background exposure of OSL disks to dimmed room light; (3) correct pairing between the dose readings and measurement locations. The authors acquired 100 dose measurements at various positions in the phantom, which was scanned using a clinical chest protocol with both angular and z-axis tube current modulations. Results: Dose calibration was performed according to the beam qualities inside the phantom as determined from an established Monte Carlo model of the scanner. The influence of the brief exposure to dimmed room light was evaluated and deemed negligible. Pairing between the OSL readings and measurement locations was ensured by the experimental design. The organ doses measured for a routine adult chest scan protocol ranged from 9.4 to 18.8 mGy, depending on the composition, location, and surrounding anatomy of the organs. The dose distribution across different slices of the phantom strongly depended on the z-axis mA modulation. In the same slice, doses to the soft tissues other than the spinal cord demonstrated

  18. A method to acquire CT organ dose map using OSL dosimeters and ATOM anthropomorphic phantoms

    International Nuclear Information System (INIS)

    Zhang, Da; Li, Xinhua; Liu, Bob; Gao, Yiming; Xu, X. George

    2013-01-01

    Purpose: To present the design and procedure of an experimental method for acquiring densely sampled organ dose map for CT applications, based on optically stimulated luminescence (OSL) dosimeters “nanoDots” and standard ATOM anthropomorphic phantoms; and to provide the results of applying the method—a dose data set with good statistics for the comparison with Monte Carlo simulation result in the future. Methods: A standard ATOM phantom has densely located holes (in 3 × 3 cm or 1.5 × 1.5 cm grids), which are too small (5 mm in diameter) to host many types of dosimeters, including the nanoDots. The authors modified the conventional way in which nanoDots are used, by removing the OSL disks from the holders before inserting them inside a standard ATOM phantom for dose measurements. The authors solved three technical difficulties introduced by this modification: (1) energy dependent dose calibration for raw OSL readings; (2) influence of the brief background exposure of OSL disks to dimmed room light; (3) correct pairing between the dose readings and measurement locations. The authors acquired 100 dose measurements at various positions in the phantom, which was scanned using a clinical chest protocol with both angular and z-axis tube current modulations. Results: Dose calibration was performed according to the beam qualities inside the phantom as determined from an established Monte Carlo model of the scanner. The influence of the brief exposure to dimmed room light was evaluated and deemed negligible. Pairing between the OSL readings and measurement locations was ensured by the experimental design. The organ doses measured for a routine adult chest scan protocol ranged from 9.4 to 18.8 mGy, depending on the composition, location, and surrounding anatomy of the organs. The dose distribution across different slices of the phantom strongly depended on the z-axis mA modulation. In the same slice, doses to the soft tissues other than the spinal cord demonstrated

  19. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  20. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  1. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This dataset compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  2. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate the importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  3. Measurement of radiotherapy CBCT dose in a phantom using different methods

    International Nuclear Information System (INIS)

    Hu, Naonori; McLean, Donald

    2014-01-01

    Cone beam computed tomography (CBCT) is widely used for the precise and accurate patient set-up needed during radiation therapy, notably for hypofractionated treatments such as intensity modulated radiation therapy and stereotactic radiation therapy. Reported doses associated with CBCT indicate the potential to approach radiation tolerance levels for some critical organs. However, while some manufacturers state the CBCT dose for each standard protocol, there are currently no standard or recognised protocols for CBCT dosimetry. This study applied the wide-beam computed tomography dosimetry approaches reported by the International Atomic Energy Agency and the American Association of Physicists in Medicine to investigate dosimetry for the Varian Trilogy linear accelerator with on-board imager v1.5. Three detection methods were used: (i) 100 mm and 300 mm pencil ionisation chambers, (ii) a 0.6 cm³ ionisation chamber and (iii) Gafchromic film. Measurements were performed using custom-built 45 cm long PMMA phantoms as well as standard 15 cm long phantoms for both head and body simulation. The results from the different detector systems agreed well (within 3%). The measured CBCT dose differed substantially from the dose stated by Varian, being 40% above the stated dose for the standard head protocol. This shows the importance of independently verifying the dose stated by the vendor for standard procedures.
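    For reference, pencil-chamber CT dosimetry rests on the standard CTDI quantities below (textbook definitions, not values from this study); the IAEA wide-beam approach extends the integration length, which is why a 300 mm chamber appears alongside the usual 100 mm one:

        \[
        \mathrm{CTDI}_{100} = \frac{1}{N\,T}\int_{-50\,\mathrm{mm}}^{+50\,\mathrm{mm}} D(z)\,\mathrm{d}z,
        \qquad
        \mathrm{CTDI}_{w} = \tfrac{1}{3}\,\mathrm{CTDI}_{100}^{\text{center}}
                          + \tfrac{2}{3}\,\mathrm{CTDI}_{100}^{\text{periphery}},
        \]

    where D(z) is the dose profile along the rotation axis and N·T is the total nominal beam width.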

  4. Dose comparison using deformed image registration method on breast cancer radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Won; Kim, Jung Hoon [Dept. of Radiation Oncology, KonYang University Hospital, Daejeon (Korea, Republic of); Won, Young Jin [Dept. of Radiation Oncology, InJe University Ilsan Paik Hospital, Goyang (Korea, Republic of)

    2017-03-15

    The purpose of this study was to reconstruct the treatment plan by applying CBCT and deformable image registration (DIR) to capture dose changes caused by changes in patient motion and breast shape in breast cancer patients with large breasts, and to compare the doses of TWF, FIF and IMRT plans. CT and CBCT data were registered with MIM6 to create a DIRCT, and each treatment plan was generated. Patients underwent computed tomography simulation in both prone and supine positions. The homogeneity index (HI), conformity index (CI) and coverage index (CVI) for the left breast as the planning target volume (PTV) were determined, and the doses to the lung, heart and right breast as organs at risk (OAR) were compared using dose-volume histograms and the characteristics of each organ. The HI of the PTV breast increased in all treatment planning methods using the DIRCT, while the CVI and CI decreased in the treatment planning methods using the DIRCT.

  5. ARN Training Course on Advance Methods for Internal Dose Assessment: Application of Ideas Guidelines

    International Nuclear Information System (INIS)

    Rojo, A.M.; Gomez Parada, I.; Puerta Yepes, N.; Gossio, S.

    2010-01-01

    Dose assessment in the case of internal exposure involves the estimation of committed effective dose based on the interpretation of bioassay measurements and on assumptions about the characteristics of the radioactive material, the time pattern and the pathway of intake. The IDEAS Guidelines provide a method to harmonize dose evaluations using criteria and flow-chart procedures to be followed step by step. The EURADOS Working Group 7 'Internal Dosimetry', in collaboration with the IAEA and the Czech Technical University (CTU) in Prague, promoted the 'EURADOS/IAEA Regional Training Course on Advanced Methods for Internal Dose Assessment: Application of IDEAS Guidelines', which took place in Prague (Czech Republic) from 2-6 February 2009, to broaden and encourage the use of the IDEAS Guidelines. The ARN recognized the relevance of this training and requested a place in this activity. After that, the first training course in Argentina took place from 24-28 August to train local internal dosimetry experts. (authors)

  6. A simple method for conversion of airborne gamma-ray spectra to ground level doses

    DEFF Research Database (Denmark)

    Korsbech, Uffe C C; Bargholz, Kim

    1996-01-01

    A new and simple method for conversion of airborne NaI(Tl) gamma-ray spectra to dose rates at ground level has been developed. By weighting the channel count rates with the channel numbers, a spectrum dose index (SDI) is calculated for each spectrum. Ground-level dose rates are then determined by multiplying the SDI by an altitude-dependent conversion factor. The conversion factors are determined from spectra based on Monte Carlo calculations. The results are compared with measurements in a laboratory calibration set-up. IT-NT-27. June 1996. 27 p.
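    In compact form (the notation here is ours, not the report's), the index and the conversion read:

        \[
        \mathrm{SDI} = \sum_{i=1}^{N} i \, c_i,
        \qquad
        \dot{D}_{\text{ground}} = k(h)\,\mathrm{SDI},
        \]

    where c_i is the count rate in channel i and k(h) is the altitude-dependent conversion factor for flying height h.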

  7. A novel method for measuring patients' adherence to insulin dosing guidelines: introducing indicators of adherence

    Directory of Open Access Journals (Sweden)

    Cahané Michel

    2008-12-01

    Background: Type 1 diabetic patients are often advised to use dose adjustment guidelines to calculate their doses of insulin. Conventional methods of measuring patients' adherence are not applicable to these cases, because insulin doses are not determined in advance. We propose a method and a number of indicators to measure patients' conformance to these insulin dosing guidelines. Methods: We used a database of logbooks of type 1 diabetic patients who participated in a summer camp. Patients used a guideline to calculate the doses of insulin lispro and glargine four times a day, and registered their injected doses in the database. We implemented the guideline in a computer system to calculate recommended doses. We then compared injected and recommended doses by using five indicators that we designed for this purpose: absolute agreement (AA): the two doses are the same; relative agreement (RA): there is a slight difference between them; extreme disagreement (ED): the administered and recommended doses are opposite; under-treatment (UT) and over-treatment (OT): the injected dose is not enough or too high, respectively. We used a weighted linear regression model to study the evolution of these indicators over time. Results: We analyzed 1656 insulin doses injected by 28 patients during a three-week camp. Overall indicator rates were AA = 45%, RA = 30%, ED = 2%, UT = 26% and OT = 30%. The highest rate of absolute agreement was obtained for insulin glargine (AA = 70%). One patient with alarming behavior (AA = 29%, RA = 24% and ED = 8%) was detected. The monitoring of these indicators over time revealed a rising adherence curve that fitted a weighted linear model well (slope = 0.85, significance = 0.002). This shows an improvement in the quality of the patients' therapeutic decision-making during the camp. Conclusion: Our method allowed the measurement of patients' adherence to their insulin adjustment guidelines. The indicators that we
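    A sketch of how such indicators can be computed from paired injected/recommended doses is given below. The agreement thresholds are hypothetical (the paper's exact cut-offs are not reproduced here), and the extreme-disagreement indicator is omitted because it needs the guideline context to define "opposite" decisions.

        import numpy as np

        def adherence_indicators(injected, recommended, slight=1.0):
            """Rates of agreement indicators; 'slight' is an assumed RA threshold
            in dose units, not the published cut-off."""
            inj = np.asarray(injected, float)
            rec = np.asarray(recommended, float)
            diff = inj - rec
            return {
                "AA": np.mean(diff == 0),                              # exact match
                "RA": np.mean((diff != 0) & (np.abs(diff) <= slight)), # small deviation
                "UT": np.mean(diff < 0),                               # under-treatment
                "OT": np.mean(diff > 0),                               # over-treatment
            }

        print(adherence_indicators([6, 8, 10, 7], [6, 9, 8, 7]))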

  8. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) development and implementation of programs to simulate MFTF usage of the data base

  9. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by the Working Group on Accelerator Shielding in the Research Committee on Radiation Behavior of the Atomic Energy Society of Japan were compiled by the Radiation Safety Control Center of the National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithms, the accuracy of computer codes and the nuclear data used in the codes. (author)

  10. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 'Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core' problem is evaluated. The proposed design is a large, axially heterogeneous, oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated

  11. Jaws calibration method to get a homogeneous distribution of dose in the junction of hemi fields

    International Nuclear Information System (INIS)

    Cenizo de Castro, E.; Garcia Pareja, S.; Moreno Saiz, C.; Hernandez Rodriguez, R.; Bodineau Gil, C.; Martin-Viera Cueto, J. A.

    2011-01-01

    Hemi-field treatments are widely used in radiotherapy. Because the established tolerance for the positioning of each jaw is 1 mm, there may be cases of overlap or separation of up to 2 mm. This implies dose heterogeneity of up to 40% in the junction area. This paper presents an accurate method for calibrating the jaws so as to obtain homogeneous dose distributions when using this type of treatment. (Author)

  12. Intracavitary after loading techniques, advantages and disadvantages with high and low dose-rate methods

    International Nuclear Information System (INIS)

    Walstam, Rune

    1980-01-01

    Even though it was suggested as early as 1903, it was only when suitable sealed gamma sources became available that afterloading methods could be developed for interstitial as well as intracavitary work. The manual afterloading technique can be used only for low dose-rate irradiation, while the remote-controlled afterloading technique can be used for both low and high dose-rate irradiation. The afterloading units used at the Karolinska Institute, Stockholm, are described, and experience with their use is briefly related. (M.G.B.)

  13. A comparison of the calculation methods of the maze shielding dose

    International Nuclear Information System (INIS)

    Li Wenqian; Li Junli; Li Pengyu; Tao Yinghua

    2009-01-01

    This paper gives a theoretical method for calculating the dose rate in the maze of low- or high-energy accelerators, based on NCRP Report Nos. 49, 51 and 151. The multi-legged maze of the Miyun CT workshop of the NUCTECH Company Limited and the arc maze of the radiation laboratory of the Academy of Military Medical Sciences were calculated using this method. The calculated results were compared with MCNP simulation results and with measured results. For routine estimation of the maze dose rate, as long as the parameters are chosen properly, this method gives a conservative result and saves simulation time. It is hoped that this work can serve as a reference for future maze design and dose estimation. (authors)

  14. Neutron fluence-to-dose equivalent conversion factors: a comparison of data sets and interpolation methods

    International Nuclear Information System (INIS)

    Sims, C.S.; Killough, G.G.

    1983-01-01

    Various segments of the health physics community advocate the use of different sets of neutron fluence-to-dose equivalent conversion factors as a function of energy and different methods of interpolation between discrete points in those data sets. The major data sets and interpolation methods are used to calculate the spectrum average fluence-to-dose equivalent conversion factors for five spectra associated with the various shielded conditions of the Health Physics Research Reactor. The results obtained by use of the different data sets and interpolation methods are compared and discussed. (author)
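    A small sketch of the computation this comparison revolves around: interpolating tabulated conversion factors between discrete energy points (log-log interpolation is one common choice) and folding them with a spectrum. The energy grid, factor values and toy spectrum below are made-up placeholders, not one of the data sets discussed.

        import numpy as np

        # hypothetical discrete conversion-factor table C(E) and its energy grid
        E_tab = np.array([1e-8, 1e-6, 1e-4, 1e-2, 1.0, 10.0])          # MeV (assumed)
        C_tab = np.array([1e-11, 9e-12, 8e-12, 3e-11, 4e-10, 5e-10])   # illustrative

        def loglog_interp(E, E_tab, C_tab):
            """Interpolate conversion factors log-log between tabulated points."""
            return np.exp(np.interp(np.log(E), np.log(E_tab), np.log(C_tab)))

        def spectrum_average(E, phi, E_tab, C_tab):
            """Fluence-weighted average conversion factor for a spectrum phi(E)."""
            C = loglog_interp(E, E_tab, C_tab)
            return np.trapz(phi * C, E) / np.trapz(phi, E)

        E = np.geomspace(1e-8, 10.0, 500)                    # MeV
        phi = np.exp(-((np.log(E) - np.log(1e-2)) ** 2))     # toy spectrum shape
        print(spectrum_average(E, phi, E_tab, C_tab))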

  15. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

    International Nuclear Information System (INIS)

    Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.

    2002-01-01

    An original method for obtaining programmed beam single shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single-pulse dose measurement, or research experiments where pulse-to-pulse dose reproducibility is required. This method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built at NILPRP. In order to implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A and 4 ms) and magnetron (45 kV, 100 A, and 4 ms). The existence of the accelerated electron beam is determined by the overlapping of the electron gun and magnetron pulses. The method consists of controlling the overlapping of the pulses in order to deliver the beam in the desired sequence. This control is implemented by a discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS controls the electron gun and magnetron pulse overlapping and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining the pulse to pulse dose reproducibility: the method

  16. Methodic of the gamma-rays absorbed dose measurements on tooth enamel

    International Nuclear Information System (INIS)

    Linev, S.V.; Muravskij, V.A.; Mashevskij, A.A.; Ugolev, I.I.

    1997-01-01

    The metrological aspects of tooth enamel ESR dosimetry have been analysed, and sample preparation and measurement methods have been elaborated. The methods have passed metrological certification. They include tabletting a mixture of tooth enamel powder with an MnO additional standard of paramagnetic-centre concentration, two cycles of additional irradiation of the samples with a 1 Gy dose followed by ESR-spectra measurements, and calculation of the absorbed dose by a maximum-likelihood algorithm. The dose calculation algorithm uses an enamel spectrum model with an axially anisotropic spin-Hamiltonian, based on 126 spectra of enamel samples. The algorithm takes into account the spectra of the empty cavity, the sample tube, the glue and the MnO standard. The certificated ESR station is based on the PS-100X ESR analyser. The ESR station provides tooth enamel absorbed dose measurements from 0.05 to 0.25 Gy with an error of 35%, and from 0.25 to 3 Gy with an error of 20%. A set of tooth enamel absorbed-dose standard samples has been created and certificated for the purposes of ESR-station testing and certification. The set consists of 12 tabletted samples of tooth enamel irradiated with doses from 0.05 to 4 Gy. (authors). 7 refs., 1 tab., 2 figs

  17. Optimization in radiotherapy treatment planning thanks to a fast dose calculation method

    International Nuclear Information System (INIS)

    Yang, Mingchao

    2014-01-01

    This thesis deals with radiotherapy treatment planning, which needs a fast and reliable treatment planning system (TPS). The TPS is composed of a dose calculation algorithm and an optimization method. The objective is to design a plan that delivers the dose to the tumor while preserving the surrounding healthy and sensitive tissues. Treatment planning aims to determine the radiation parameters best suited to each patient's treatment. In this thesis, the parameters of treatment with IMRT (intensity modulated radiation therapy) are the beam angles and the beam intensities. The objective function is multi-criteria with linear constraints. The main objective of this thesis is to demonstrate the feasibility of a treatment planning optimization method based on a fast dose-calculation technique developed by Blanpain (2009). This technique computes the dose by segmenting the patient's phantom into homogeneous meshes. The dose computation is divided into two steps. The first step concerns the meshes: projections and weights are set according to physical and geometrical criteria. The second step concerns the voxels: the dose is computed by evaluating the functions previously associated with their mesh. A reformulation of this technique makes it possible to solve the optimization problem by a gradient descent algorithm. The main advantage of this method is that the beam angle parameters can be optimized continuously in 3 dimensions. The results obtained in this thesis offer many opportunities in the field of radiotherapy treatment planning optimization. (author)
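    The optimization step can be illustrated with a generic projected-gradient sketch (not the thesis' actual solver): nonnegative beamlet intensities x are adjusted so the dose A x approaches a prescription d. The influence matrix and prescription below are random stand-ins.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((200, 20))                # dose-influence matrix: voxels x beamlets
        d = rng.random(200) * A.sum(1).mean()    # made-up target dose per voxel

        x = np.zeros(20)
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the Lipschitz constant
        for _ in range(500):
            grad = A.T @ (A @ x - d)             # gradient of 0.5*||A x - d||^2
            x = np.maximum(x - step * grad, 0.0) # gradient step + projection onto x >= 0
        print(f"residual: {np.linalg.norm(A @ x - d):.3f}")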

  18. Technical Note: The impact of deformable image registration methods on dose warping.

    Science.gov (United States)

    Qin, An; Liang, Jian; Han, Xiao; O'Connell, Nicolette; Yan, Di

    2018-03-01

    The purpose of this study was to investigate the clinically relevant discrepancy between doses warped by pure image-based deformable image registration (IM-DIR) and by biomechanical-model-based DIR (BM-DIR) in intensity-homogeneous organs. Ten patients (5 head & neck, 5 prostate) were included. A research DIR tool (ADMRIE_v1.12) was utilized for IM-DIR. After IM-DIR, BM-DIR was carried out for organs (parotids, bladder, and rectum) which often encompass a sharp dose gradient. Briefly, high-quality tetrahedron meshes were generated and the deformation vector fields (DVF) from IM-DIR were interpolated to the surface nodes of the volume meshes as the boundary condition. Then, a FEM solver (ABAQUS_v6.14) was used to simulate the displacement of the internal nodes, which was then interpolated to the image-voxel grid to obtain the more physically plausible DVF. Both the geometric and the subsequent dose-warping discrepancies were quantified between the two DIR methods. The target registration discrepancy (TRD) was evaluated to show the geometric difference. The re-calculated doses on the second CT were warped to the pre-treatment CT via the two DIR methods. Clinically relevant dose parameters and the γ passing rate were compared between the two types of warped dose. The correlation between parotid shrinkage and TRD/dose discrepancy was evaluated. The parotid shrank to 75.7% ± 9% of its pre-treatment volume and the percentage of volume with TRD > 1.5 mm was 6.5% ± 4.7%. The normalized mean-dose difference (NMDD) of IM-DIR and BM-DIR was -0.8% ± 1.5%, with range (-4.7% to 1.5%). The 2 mm/2% γ passing rate was 99.0% ± 1.4%. A moderate correlation was found between parotid shrinkage and TRD and NMDD. The bladder had an NMDD of -9.9% ± 9.7%, with the BM-DIR warped dose systematically higher. Only a minor deviation was observed for the rectum NMDD (0.5% ± 1.1%). The impact of the DIR method on treatment-dose warping is patient- and organ-specific. Generally, intensity-homogeneous organs, which undergo larger deformation/shrinkage during
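    For readers unfamiliar with the "2 mm/2%" criterion, the passing rate refers to the standard gamma index (this is the textbook definition, not a formula taken from the paper): with distance tolerance Δd and dose tolerance ΔD, a point r_e of the evaluated dose D_e passes when γ ≤ 1, where

        \[
        \gamma(\mathbf{r}_e) = \min_{\mathbf{r}_r}
        \sqrt{\frac{\lVert \mathbf{r}_r - \mathbf{r}_e \rVert^{2}}{\Delta d^{2}}
            + \frac{\left[D_r(\mathbf{r}_r) - D_e(\mathbf{r}_e)\right]^{2}}{\Delta D^{2}}},
        \qquad \Delta d = 2\ \text{mm},\ \Delta D = 2\%,
        \]

    and D_r is the reference dose distribution over which the minimum is taken.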

  19. Measuring Protein Synthesis Rate In Living Object Using Flooding Dose And Constant Infusion Methods

    OpenAIRE

    Ulyarti, Ulyarti

    2018-01-01

    Constant infusion is a method for measuring protein synthesis rates in living organisms which uses low concentrations of amino acid tracers. The flooding dose method is another technique for measuring the rate of protein synthesis, which uses labelled amino acid together with a large amount of unlabelled amino acid. The latter method was first developed to solve the problem of determining the precursor pool that arises with the constant infusion method. The objective of this writing is to com...

  20. Application of accelerated evaluation method of alteration temperature and constant dose rate irradiation on bipolar linear regulator LM317

    International Nuclear Information System (INIS)

    Deng Wei; Wu Xue; Wang Xin; Zhang Jinxin; Zhang Xiaofu; Zheng Qiwen; Ma Wuying; Lu Wu; Guo Qi; He Chengfa

    2014-01-01

    Using different irradiation methods, including high dose rate irradiation, low dose rate irradiation, alteration-temperature and constant dose rate irradiation, and the US military standard constant high temperature and constant dose rate irradiation, the ionizing radiation responses of the bipolar linear regulator LM317 from three different companies were investigated under operating and zero biases. The results show that, compared with the constant high temperature and constant dose rate irradiation method, the alteration-temperature and constant dose rate irradiation method can not only rapidly and accurately evaluate the dose rate effect of the three bipolar linear regulators, but can also simulate well the damage from low dose rate irradiation. The experimental results show that the alteration-temperature and constant dose rate irradiation method can be successfully applied to bipolar linear regulators. (authors)

  1. Development of fluorescent, oscillometric and photometric methods to determine absorbed dose in irradiated fruits and nuts

    International Nuclear Information System (INIS)

    Kovacs, A.; Foeldiak, G.; Hargittai, P.; Miller, S.D.

    2001-01-01

    To ensure suitable quality control in food irradiation technologies and for quarantine authorities, simple routine dosimetry methods are needed for absorbed dose control. Taking into account the requirements at quarantine locations, these methods should allow nondestructive analysis for repeated measurements. Different dosimetry systems with different analytical evaluation methods have been tested and/or developed for absorbed dose measurements in the dose range of 0.1-10 kGy. In order to use the well-accepted ethanol-monochlorobenzene dosimeter solution and the recently developed aqueous alanine solution in small-volume sealed vials, a new portable, digital, and programmable oscillometric reader was developed. To exploit the very sensitive fluorimetric evaluation method, liquid and solid inorganic and organic dosimetry systems were developed for dose control using a new routine, portable, computer-controlled fluorimeter. Absorption and transmission photometric methods were also applied for dose measurements of solid- or liquid-phase dosimeter systems containing radiochromic dyes, which change colour upon irradiation. (author)

  2. The continual reassessment method: comparison of Bayesian stopping rules for dose-ranging studies.

    Science.gov (United States)

    Zohar, S; Chevret, S

    2001-10-15

    The continual reassessment method (CRM) provides a Bayesian estimation of the maximum tolerated dose (MTD) in phase I clinical trials and is also used to estimate the minimal efficacy dose (MED) in phase II clinical trials. In this paper we propose Bayesian stopping rules for the CRM, based on either posterior or predictive probability distributions, that can be applied sequentially during the trial. These rules aim at early detection of either a mis-choice of dose range or a prefixed gain in the point estimate or accuracy of the estimated probability of response associated with the MTD (or MED). They were compared through a simulation study under six situations that could represent the underlying unknown dose-response (either toxicity or failure) relationship, in terms of sample size, probability of correct selection and bias of the response probability associated with the MTD (or MED). Our results show that the stopping rules behave correctly: the first two rules, based on the posterior distribution, stop the trial early when the actual underlying dose-response relationship is far from that initially supposed, while the rules based on predictive gain functions stop inclusions after 20 patients on average whatever the actual dose-response curve, that is, depending mostly on the accumulated data. The stopping rules were then applied to a data set from a dose-ranging phase II clinical trial aiming to estimate the MED of midazolam in the sedation of infants during cardiac catheterization. All these findings suggest the early use of the first two rules to detect a mis-choice of dose range, while they confirm the requirement of including at least 20 patients at the same dose to reach an accurate estimate of the MTD (MED). A two-stage design is under study. Copyright 2001 John Wiley & Sons, Ltd.
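    The Bayesian machinery behind the CRM can be sketched in a few lines. The one-parameter power model with a N(0, 1.34) prior is the textbook form of the CRM; the skeleton, target and observed outcomes below are invented for illustration and are not the trial data discussed in the paper.

        import numpy as np

        skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])  # prior toxicity guesses
        target = 0.20                                        # target toxicity probability
        theta = np.linspace(-4, 4, 2001)                     # grid for the model parameter
        prior = np.exp(-theta**2 / (2 * 1.34)) / np.sqrt(2 * np.pi * 1.34)  # N(0, 1.34)

        def posterior_tox(dose_levels, toxicities):
            """Posterior-mean toxicity at each dose level, given (dose, outcome) pairs,
            under the power model p_i(theta) = skeleton[i] ** exp(theta)."""
            p = skeleton[dose_levels][:, None] ** np.exp(theta)
            tox = np.array(toxicities)[:, None]
            like = np.prod(np.where(tox == 1, p, 1 - p), axis=0)
            post = like * prior
            post /= np.trapz(post, theta)
            return np.trapz(skeleton[:, None] ** np.exp(theta) * post, theta, axis=1)

        # three patients at dose level index 1, one toxicity observed
        est = posterior_tox([1, 1, 1], [0, 0, 1])
        next_dose = int(np.argmin(np.abs(est - target)))
        print(est.round(3), "-> next dose level:", next_dose)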

  3. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    International Nuclear Information System (INIS)

    Almen, A.J.

    1995-09-01

    A method for estimating the mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. The size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and the entrance surface dose for five different body sizes were calculated. Direct measurements on patients estimating entrance surface dose and energy imparted were performed for common X-ray investigations. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year-old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations were calculated. Calculations of effective dose were supplemented with risk calculations for specific organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient's weight and length. 91 refs, 17 figs, 8 tabs

  4. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    Energy Technology Data Exchange (ETDEWEB)

    Almen, A J

    1995-09-01

    A method for estimating the mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. The size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and the entrance surface dose for five different body sizes were calculated. Direct measurements on patients estimating entrance surface dose and energy imparted were performed for common X-ray investigations. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year-old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations were calculated. Calculations of effective dose were supplemented with risk calculations for specific organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient's weight and length. 91 refs, 17 figs, 8 tabs.

  5. Application of the dose rate spectroscopy to the dose-to-curie conversion method using a NaI(Tl) detector

    International Nuclear Information System (INIS)

    JI, Young-Yong; Chung, Kun Ho; Kim, Chang-Jong; Kang, Mun Ja; Park, Sang Tae

    2015-01-01

    Dose rate spectroscopy is a very useful method for directly calculating individual dose rates from the measured energy spectrum, using a G-factor that is related to the response function of the detector used. A dose-to-curie (DTC) conversion method, which estimates radioactivity from the dose rate measured from radioactive materials, can then be reduced to a simple equation using dose rate spectroscopy. To validate the modified DTC conversion method, experimental verifications using a 3″φ x 3″ NaI(Tl) detector were conducted for the simple geometry of a point source located on the detector and for more complex geometries corresponding to the assay of simulated radioactive material. In addition, the linearity of the results from the modified DTC conversion method was estimated by increasing the distance between the source positions and the detector, to confirm the validity of the method over the relevant ranges of gamma energy, dose rate, and distance. - Highlights: • A modified DTC conversion method using dose rate spectroscopy was established. • In-situ calibration factors were calculated from an MCNP simulation. • Radioactivities of the disk sources were accurately calculated using the modified DTC conversion method. • The modified DTC conversion method was applied to the assay of radioactive material
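    Schematically (in our notation, not the paper's), the two steps are: the dose rate follows from folding the spectrum with the G-factor, and the activity follows from the dose rate. For a point source at distance d with gamma-ray dose-rate constant Γ, the DTC step takes the familiar inverse-square form:

        \[
        \dot{D} \;=\; \sum_{i} G(E_i)\,N(E_i),
        \qquad
        A \;\approx\; \frac{\dot{D}\, d^{2}}{\Gamma},
        \]

    where N(E_i) is the count rate in energy channel i of the measured spectrum.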

  6. Benchmarking Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jakic, I.

    2016-01-01

    One of the main tasks an owner has is to keep its business competitive on the market while delivering its product. Being the owner of a nuclear power plant bears the same (or an even more complex and stern) responsibility due to safety risks and costs. In the past, nuclear power plant managements could (partly) ignore profit, or it was simply expected and to some degree assured through the various regulatory processes governing electricity rate design. It is obvious now that, with deregulation, utility privatization and a competitive electricity market, the key measures of success used at nuclear power plants must include traditional metrics of a successful business (return on investment, earnings and revenue generation) as well as those of plant performance, safety and reliability. In order to analyze the business performance of a (specific) nuclear power plant, benchmarking, a well-established concept and usual method, was used. The domain was conservatively designed, with a well-adjusted framework, but the results still have limited application due to many differences, gaps and uncertainties. (author).

  7. Development of a method to estimate organ doses for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Papadakis, Antonios E., E-mail: apapadak@pagni.gr; Perisinakis, Kostas; Damilakis, John [Department of Medical Physics, University Hospital of Heraklion, Faculty of Medicine, University of Crete, P.O. Box 1352, Iraklion, Crete 71110 (Greece)

    2016-05-15

    Purpose: To develop a method for estimating doses to primarily exposed organs in pediatric CT by taking into account patient size and automatic tube current modulation (ATCM). Methods: A Monte Carlo CT dosimetry software package, which creates patient-specific voxelized phantoms, accurately simulates CT exposures, and generates dose images depicting the energy imparted on the exposed volume, was used. Routine head, thorax, and abdomen/pelvis CT examinations of 92 pediatric patients, ranging from 1 month to 14 yr old (49 boys and 43 girls), were simulated on a 64-slice CT scanner. Two sets of simulations were performed for each patient using (i) a fixed tube current (FTC) value over the entire examination length and (ii) the ATCM profile extracted from the DICOM headers of the reconstructed images. CTDIvol-normalized organ doses were derived for all primarily irradiated radiosensitive organs. The normalized dose data were correlated to the patient's water-equivalent diameter using log-transformed linear regression analysis. Results: The maximum percent difference in normalized organ dose between FTC and ATCM acquisitions was 10% for the eyes in the head, 26% for the thymus in the thorax, and 76% for the kidneys in the abdomen/pelvis. In most of the organs, the correlation between dose and water-equivalent diameter was significantly improved in ATCM compared to FTC acquisitions (P < 0.001). Conclusions: The proposed method employs size-specific CTDIvol-normalized organ dose coefficients for ATCM-activated and FTC acquisitions in pediatric CT. These coefficients are substantially different between the ATCM and FTC modes of operation and enable a more accurate assessment of patient-specific organ dose in the clinical setting.
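    The log-transformed fit mentioned above amounts to modeling the CTDIvol-normalized organ dose as exponential in water-equivalent diameter. A sketch with invented numbers (not the paper's coefficients):

        import numpy as np

        # made-up size/dose pairs: ln(organ dose / CTDIvol) is modeled as linear in Dw,
        # i.e. dose_norm = exp(a + b * Dw)
        Dw = np.array([12.0, 15.0, 18.0, 21.0, 24.0, 27.0])          # cm
        dose_norm = np.array([1.60, 1.38, 1.15, 0.98, 0.84, 0.70])   # organ dose / CTDIvol

        b, a = np.polyfit(Dw, np.log(dose_norm), 1)   # slope, intercept of the log fit

        def organ_dose(ctdi_vol_mgy, dw_cm):
            """Estimate organ dose (mGy) from scanner CTDIvol and patient size."""
            return ctdi_vol_mgy * np.exp(a + b * dw_cm)

        print(organ_dose(5.0, 20.0))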

  8. Fluoxetine Dose and Administration Method Differentially Affect Hippocampal Plasticity in Adult Female Rats

    Science.gov (United States)

    Pawluski, Jodi L.; van Donkelaar, Eva; Abrams, Zipporah; Steinbusch, Harry W. M.; Charlier, Thierry D.

    2014-01-01

    Selective serotonin reuptake inhibitor medications are one of the most common treatments for mood disorders. In humans, these medications are taken orally, usually once per day. Unfortunately, administration of antidepressant medications in rodent models is often through injection, oral gavage, or minipump implant, all relatively stressful procedures. The aim of the present study was to investigate how administration of the commonly used SSRI, fluoxetine, via a wafer cookie, compares to fluoxetine administration using an osmotic minipump, with regards to serum drug levels and hippocampal plasticity. For this experiment, adult female Sprague-Dawley rats were divided over the two administration methods: (1) cookie and (2) osmotic minipump and three fluoxetine treatment doses: 0, 5, or 10 mg/kg/day. Results show that a fluoxetine dose of 5 mg/kg/day, but not 10 mg/kg/day, results in comparable serum levels of fluoxetine and its active metabolite norfluoxetine between the two administration methods. Furthermore, minipump administration of fluoxetine resulted in higher levels of cell proliferation in the granule cell layer (GCL) at a 5 mg dose compared to a 10 mg dose. Synaptophysin expression in the GCL, but not CA3, was significantly lower after fluoxetine treatment, regardless of administration method. These data suggest that the administration method and dose of fluoxetine can differentially affect hippocampal plasticity in the adult female rat. PMID:24757568

  9. Fluoxetine Dose and Administration Method Differentially Affect Hippocampal Plasticity in Adult Female Rats

    Directory of Open Access Journals (Sweden)

    Jodi L. Pawluski

    2014-01-01

    Selective serotonin reuptake inhibitor medications are one of the most common treatments for mood disorders. In humans, these medications are taken orally, usually once per day. Unfortunately, administration of antidepressant medications in rodent models is often through injection, oral gavage, or minipump implant, all relatively stressful procedures. The aim of the present study was to investigate how administration of the commonly used SSRI, fluoxetine, via a wafer cookie, compares to fluoxetine administration using an osmotic minipump, with regards to serum drug levels and hippocampal plasticity. For this experiment, adult female Sprague-Dawley rats were divided over the two administration methods: (1) cookie and (2) osmotic minipump, and three fluoxetine treatment doses: 0, 5, or 10 mg/kg/day. Results show that a fluoxetine dose of 5 mg/kg/day, but not 10 mg/kg/day, results in comparable serum levels of fluoxetine and its active metabolite norfluoxetine between the two administration methods. Furthermore, minipump administration of fluoxetine resulted in higher levels of cell proliferation in the granule cell layer (GCL) at a 5 mg dose compared to a 10 mg dose. Synaptophysin expression in the GCL, but not CA3, was significantly lower after fluoxetine treatment, regardless of administration method. These data suggest that the administration method and dose of fluoxetine can differentially affect hippocampal plasticity in the adult female rat.

  10. Multiple methods for assessing the dose to skin exposed to radioactive contamination

    International Nuclear Information System (INIS)

    Dubeau, J.; Heinmiller, B.E.; Corrigan, M.

    2017-01-01

    There is the possibility for a worker at a nuclear installation, such as a nuclear power reactor, a fuel production facility or a medical facility, to come into contact with radioactive contaminants. When such an event occurs, the first order of business is to care for the worker by promptly initiating a decontamination process. Usually, the radiation protection personnel perform a G-M pancake probe measurement of the contamination in situ and collect part or all of the radioactive contamination for further laboratory analysis. The health physicist on duty must then perform, using the available information, a skin dose assessment that will go into the worker's permanent dose record. The contamination situations are often complex and the dose assessment can be laborious. This article compares five dose assessment methods that involve analysis, new technologies and new software. The five methods are applied to 13 actual contamination incidents consisting of direct skin contact, contamination on clothing, and contamination on clothing in the presence of an air gap between the clothing and the skin. This work shows that, for the cases studied, the methods provided dose estimates that were usually within 12% (1σ) of each other, for those cases where absolute activity information for every radionuclide was available. One method, which relies simply on a G-M pancake probe measurement, appeared to be particularly useful in situations where a contamination sample could not be recovered for laboratory analysis. (authors)

  11. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  12. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity

  13. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes...... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  14. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently get into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process, which lies in the search for suitable referencing partners. The partners are consequently selected to meet general requirements that ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  15. Comparison between methods for evaluating protocols with different dose distributions in radiotherapy

    International Nuclear Information System (INIS)

    Ju Yongjian; Chen Meihua; Sun Fuyin; Zhang Liang'an; Lei Chengzhi

    2004-01-01

    Objective: To study how the relationship between tumor control probability (TCP) or equivalent uniform dose (EUD) and the degree of dose heterogeneity changes with variable biological parameter values of the tumor. Methods: Equations for calculating TCP and EUD were derived from their definitions. The dose distributions in the tumor were assumed to be Gaussian. The tumor volume was divided into several voxels, and the absorbed doses of these voxels were simulated by Monte Carlo methods. Then, for different values of the radiosensitivity (α) and the potential doubling time of the clonogens (Tp), the relationships between TCP or EUD and the standard deviation of the dose (Sd) were evaluated. Results: The TCP-Sd curves were influenced by the variable α and Tp values, but the EUD-Sd curves showed little variation. Conclusion: When radiotherapy protocols with different dose distributions are compared, it is better to use the TCP if the biological parameter values of the tumor are known exactly; otherwise the EUD is preferred
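
    The interplay described above is easy to reproduce numerically. The following Python sketch (a simplified illustration, not the paper's code) draws Gaussian voxel doses and computes a Poisson TCP and a Niemierko-style EUD for increasing dose standard deviation Sd; all parameter values are assumed, and fractionation and repopulation (the Tp term) are ignored.

        import numpy as np

        rng = np.random.default_rng(42)

        # Illustrative parameters (assumed, not from the paper)
        alpha = 0.3           # radiosensitivity (1/Gy)
        n_clonogens = 1e7     # total clonogen number
        mean_dose = 60.0      # mean tumor dose (Gy)
        n_voxels = 10000

        for sd in (0.0, 1.0, 2.0, 4.0):           # dose standard deviation Sd (Gy)
            d = rng.normal(mean_dose, sd, n_voxels)
            sf = np.exp(-alpha * d)               # per-voxel surviving fraction
            tcp = np.exp(-(n_clonogens / n_voxels) * sf.sum())  # Poisson TCP
            eud = -np.log(np.mean(np.exp(-alpha * d))) / alpha  # Niemierko-style EUD
            print(f"Sd={sd:4.1f} Gy  TCP={tcp:.3f}  EUD={eud:.2f} Gy")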

  16. Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy; Benchmark-Experiment zur Verifikation von Strahlungstransportrechnungen fuer die Dosimetrie in der Strahlentherapie

    Energy Technology Data Exchange (ETDEWEB)

    Renner, Franziska [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (Germany)

    2016-11-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.

  17. Radioactivity in food and the environment: calculations of UK radiation doses using integrated methods

    International Nuclear Information System (INIS)

    Allott, Rob

    2003-01-01

    Dear Sir: I read with interest the paper by W C Camplin, G P Brownless, G D Round, K Winpenny and G J Hunt from the Centre for Environment, Fisheries and Aquaculture Science (CEFAS) on 'Radioactivity in food and the environment: calculations of UK radiation doses using integrated methods' in the December 2002 issue of this journal (J. Radiol. Prot. 22 371-88). The Environment Agency has a keen interest in the development of a robust methodology for assessing total doses which have been received by members of the public from authorised discharges of radioactive substances to the environment. Total dose in this context means the dose received from all authorised discharges and all exposure pathways (e.g. inhalation, external irradiation from radionuclides in sediment/soil, direct radiation from operations on a nuclear site, consumption of food etc). I chair a 'total retrospective dose assessment' working group with representatives from the Scottish Environment Protection Agency (SEPA), Food Standards Agency (FSA), National Radiological Protection Board, CEFAS and BNFL which began discussing precisely this issue during 2002. This group is a sub-group of the National Dose Assessment Working Group which was set up in April 2002 (J. Radiol. Prot. 22 318-9). The Environment Agency, Food Standards Agency and the Nuclear Installations Inspectorate previously undertook joint research into the most appropriate methodology to use for total dose assessment (J J Hancox, S J Stansby and M C Thorne 2002 The Development of a Methodology to Assess Population Doses from Multiple Source and Exposure Pathways of Radioactivity (Environment Agency R and D Technical Report P3-070/TR). This work came to broadly the same conclusion as the work by CEFAS, that an individual dose method is probably the most appropriate method to use. This research and that undertaken by CEFAS will help the total retrospective dose assessment working group refine a set of principles and a methodology for the

  18. A simple method for estimating the effective dose in dental CT. Conversion factors and calculation for a clinical low-dose protocol

    International Nuclear Information System (INIS)

    Homolka, P.; Kudler, H.; Nowotny, R.; Gahleitner, A.; Wien Univ.

    2001-01-01

    An easily applicable method to estimate the effective dose from dental computed tomography, including in its definition the high radiosensitivity of the salivary glands, is presented. Effective doses were calculated for a markedly dose-reduced dental CT protocol as well as for standard settings. Data are compared with effective doses from the literature obtained with other modalities frequently used in dental care. Methods: Conversion factors based on the weighted Computed Tomography Dose Index were derived from published data to calculate effective dose values for various CT exposure settings. Results: The conversion factors determined can be used for clinically used kVp settings and prefiltrations. With reduced tube current, an effective dose for a CT examination of the maxilla of 22 μSv can be achieved, which compares to values typically obtained with panoramic radiography (26 μSv). A CT scan of the mandible, respectively, gives 123 μSv, comparable to a full mouth survey with intraoral films (150 μSv). Conclusion: For standard CT scan protocols of the mandible, effective doses exceed 600 μSv. Hence, low dose protocols for dental CT should be considered whenever feasible, especially for paediatric patients. If hard tissue diagnosis is performed, the potential for dose reduction is significant despite the higher image noise levels, as readability is still adequate. (orig.) [de
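
    The conversion-factor approach reduces to simple arithmetic: effective dose is approximated as a factor k times the weighted CTDI times the scanned length (a DLP-style product). A minimal sketch follows; the numeric values are placeholders, not the factors derived in the paper.

        # Hypothetical values for illustration only; the paper derives its own
        # conversion factors for specific kVp settings and prefiltrations.
        def effective_dose_uSv(ctdi_w_mGy, scan_length_cm, k_uSv_per_mGy_cm):
            """Effective dose estimate E ~ k * CTDIw * L."""
            return k_uSv_per_mGy_cm * ctdi_w_mGy * scan_length_cm

        # Example: a dose-reduced maxilla protocol (all numbers assumed)
        print(f"{effective_dose_uSv(3.0, 4.0, 1.8):.0f} uSv")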

  19. Development of a method to calculate organ doses for the upper gastrointestinal fluoroscopic examination

    International Nuclear Information System (INIS)

    Suleiman, O.H.

    1989-01-01

    A method was developed to quantitatively measure the upper gastrointestinal fluoroscopic examination in order to calculate organ doses. The dynamic examination was approximated with a set of discrete x-ray fields. Once the examination was segmented into discrete x-ray fields, appropriate organ dose tables were generated using an existing computer program for organ dose calculations. This, along with knowledge of the radiation exposures associated with each of the fields, enabled the calculation of organ doses for the entire dynamic examination. The protocol involves videotaping the examination while fluoroscopic technique factors, tube current and tube potential, are simultaneously recorded on the audio tracks of the videotape. Subsequent analysis allows the dynamic examination to be segmented into a series of discrete x-ray fields uniquely defined by field size, projection, and anatomical region. The anatomical regions associated with the upper gastrointestinal examination were observed to be the upper, middle, and lower esophagus, the gastroesophageal junction, the stomach, and the duodenum
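
    The bookkeeping behind this segmentation is a weighted sum: each discrete x-ray field contributes a tabulated organ-dose coefficient multiplied by the exposure recorded for that field. A minimal sketch with invented numbers; in the study itself the coefficients come from the organ dose computer program and the exposures from the recorded technique factors.

        # (organ, organ-dose coefficient mGy per R, exposure R) -- all invented
        fields = [
            ("esophagus_upper", 0.12, 1.5),
            ("stomach",         0.30, 4.2),
            ("stomach",         0.28, 1.1),   # same organ, different field
            ("duodenum",        0.25, 2.1),
        ]

        organ_dose = {}
        for organ, coeff, exposure in fields:
            organ_dose[organ] = organ_dose.get(organ, 0.0) + coeff * exposure

        print(organ_dose)   # organ doses for the whole dynamic examination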

  20. Calibration and intercomparison methods of dose calibrators used in nuclear medicine facilities

    International Nuclear Information System (INIS)

    Costa, Alessandro Martins da

    1999-01-01

    Dose calibrators are used in most nuclear medicine facilities to determine the amount of radioactivity administered to a patient in a particular investigation or therapeutic procedure. It is therefore of vital importance that the equipment used performs well and is regularly calibrated at an authorized laboratory. This occurs if adequate quality assurance procedures are carried out. Such quality control tests should be performed regularly, some daily, others biannually or yearly, testing, for example, accuracy and precision, reproducibility and response linearity. In this work a commercial dose calibrator was calibrated with solutions of radionuclides used in nuclear medicine. Simple instrument tests, such as response linearity and the response variation with increasing source volume at a constant source activity concentration, were performed. This instrument can now be used as a working standard for the calibration of other dose calibrators. An intercomparison procedure was proposed as a method of quality control of dose calibrators used in nuclear medicine facilities. (author)

  1. Methods to verify absorbed dose of irradiated containers and evaluation of dosimeters

    International Nuclear Information System (INIS)

    Gao Meixu; Wang Chuanyao; Tang Zhangxong; Li Shurong

    2001-01-01

    The research on dose distribution in irradiated food containers and the evaluation of several methods to verify absorbed dose were carried out. The minimum absorbed dose of the five treated orange containers occurred at the top of the highest or the bottom of the lowest container. Dmax/Dmin in this study was 1.45 for irradiation in a commercial Co-60 facility. The density of the orange containers was about 0.391 g/cm3. The evaluation of dosimeters showed that the PMMA-YL and clear PMMA dosimeters have a linear dose response, and the word NOT in the STERIN-125 and STERIN-300 indicators was covered completely at doses of 125 and 300 Gy, respectively. (author)

  2. Calculational methods for estimating skin dose from electrons in Co-60 gamma-ray beams

    International Nuclear Information System (INIS)

    Higgins, P.D.; Sibata, C.H.; Attix, F.H.; Paliwal, B.R.

    1983-01-01

    Several methods have been employed to calculate the relative contribution to skin dose due to scattered electrons in Co-60 gamma-ray beams. Either the Klein-Nishina differential scattering probability is employed to determine the number and initial energy of electrons scattered into the direction of a detector, or a Gaussian approximation is used to specify the surface distribution of initial pencil electron beams created by parallel or diverging photon fields. Results of these calculations are compared with experimental data. In addition, that fraction of relative surface dose resulting from photon interactions in air alone is estimated and compared with data extrapolated from measurements at large source-surface distance (SSD). The contribution to surface dose from electrons generated in air is 50% or more of the total skin dose for SSDs greater than 80 cm
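
    The Klein-Nishina probability mentioned here is a standard closed-form expression, so a direct transcription may be useful. The sketch below evaluates the differential cross-section per electron for a given photon energy and scattering angle; it illustrates the formula itself, not the authors' full skin-dose calculation.

        import math

        R_E_CM = 2.8179403262e-13   # classical electron radius (cm)
        M_E_C2 = 0.51099895         # electron rest energy (MeV)

        def klein_nishina(e_mev, theta_rad):
            """Klein-Nishina differential cross-section (cm^2/sr per electron)."""
            eps = e_mev / M_E_C2
            p = 1.0 / (1.0 + eps * (1.0 - math.cos(theta_rad)))   # ratio k'/k
            return 0.5 * R_E_CM**2 * p**2 * (p + 1.0 / p - math.sin(theta_rad)**2)

        # Average Co-60 photon (~1.25 MeV) scattered through 30 degrees
        print(klein_nishina(1.25, math.radians(30)))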

  3. Calculational methods for estimating skin dose from electrons in Co-60 gamma-ray beams

    International Nuclear Information System (INIS)

    Higgins, P.D.; Sibata, C.H.; Attix, F.H.; Paliwal, B.R.

    1983-01-01

    Several methods have been employed to calculate the relative contribution to skin dose due to scattered electrons in Co-60 γ-ray beams. Either the Klein-Nishina differential scattering probability is employed to determine the number and initial energy of electrons scattered into the direction of a detector, or a Gaussian approximation is used to specify the surface distribution of initial pencil electron beams created by parallel or diverging photon fields. Results of these calculations are compared with experimental data. In addition, that fraction of relative surface dose resulting from photon interactions in air alone is estimated and compared with data extrapolated from measurements at large source-surface distance (SSD). The contribution to surface dose from electrons generated in air is 50% or more of the total skin dose for SSDs greater than 80 cm

  4. Evaluation of Patient Radiation Dose during Cardiac Interventional Procedures: What Is the Most Effective Method?

    International Nuclear Information System (INIS)

    Chida, K.; Saito, H.; Ishibashi, T.; Zuguchi, M.; Kagaya, Y.; Takahashi, S.

    2009-01-01

    Cardiac interventional radiology has lower risks than surgical procedures. This is despite the fact that radiation doses from cardiac intervention procedures are the highest of any commonly performed general X-ray examination. Maximum radiation skin doses (MSDs) should be determined to avoid radiation-associated skin injuries in patients undergoing cardiac intervention procedures. However, real-time evaluation of MSD is unavailable for many cardiac intervention procedures. This review describes methods of determining MSD during cardiac intervention procedures. Currently, in most cardiac intervention procedures, real-time measuring of MSD is not feasible. Thus, we recommend that physicians record the patient's total entrance skin dose, such as the dose at the interventional reference point when it can be monitored, in order to estimate MSD in intervention procedures

  5. Repeated dose titration versus age-based method in electroconvulsive therapy: a pilot study.

    Science.gov (United States)

    Aten, Jan Jaap; Oudega, Mardien; van Exel, Eric; Stek, Max L; van Waarde, Jeroen A

    2015-06-01

    In electroconvulsive therapy (ECT), a dose titration method (DTM) was suggested to be more individualized and therefore more accurate than formula-based dosing methods. A repeated DTM (every sixth session, with dose adjustment accordingly) was compared to an age-based method (ABM) regarding treatment characteristics, clinical outcome, and cognitive functioning after ECT. Thirty-nine unipolar depressed patients dosed using repeated DTM and 40 matched patients treated with ABM were compared. Montgomery-Åsberg Depression Rating Scale (MADRS) and Mini-Mental State Examination (MMSE) were assessed at baseline and at the end of the index course, as well as the total number of ECT sessions. Both groups were similar regarding age, sex, psychotic features, mean baseline MADRS, and median baseline MMSE. At the end of the index course, the two methods showed equal outcomes (mean end MADRS, 11.6 ± 8.3 in DTM and 9.5 ± 7.6 in ABM, P = 0.26; median end MMSE, 28 (25-29) and 28 (25-29.8), respectively, P = 0.81). However, the median number of all ECT sessions differed [16 (11-22) in DTM versus 12 (10-14.8) in ABM; P = 0.02]. Using regression analysis, dosing method and age were independently associated with the total number of ECT sessions, with fewer sessions needed in ABM (P = 0.02) and in older patients (P = 0.001). In this comparative cohort study, ABM and DTM showed equal outcomes for depression and cognition. However, the median ECT course duration in repeated DTM appeared longer. Additionally, higher age was associated with shorter ECT courses regardless of the dosing method. Further prospective studies are needed to confirm these findings.

  6. Rapid radiological characterization method based on the use of dose coefficients

    International Nuclear Information System (INIS)

    Dulama, C.; Toma, Al.; Dobrin, R.; Valeca, M.

    2010-01-01

    Intervention actions in case of radiological emergencies and exploratory radiological surveys require rapid methods for evaluating the range and extent of contamination. When a simple and homogeneous radionuclide composition characterizes the radioactive contamination, surrogate measurements can be used to reduce the costs implied by laboratory analyses and to speed up decision support. A dose-rate-measurement-based methodology can be used in conjunction with adequate dose coefficients to assess radionuclide inventories and to calculate dose projections for various intervention scenarios. The paper presents the results obtained for dose coefficients in some particular exposure geometries and the methodology used for deriving dose rate guidelines from activity concentration upper levels specified as contamination limits. All calculations were performed using the commercial software MicroShield from Grove Software Inc. A test case was selected to meet the conditions from EPA Federal Guidance Report no. 12 (FGR12) concerning the evaluation of dose coefficients for external exposure from contaminated soil, and the obtained results were compared to values given in the referred document. The geometries considered as test cases are: contaminated ground surface (infinite extended homogeneous surface contamination) and soil contaminated to a depth of 15 cm. As shown by the results, the values agree within a 50% relative difference for most of the cases. The greatest discrepancies were observed for the depth contamination simulation and for radionuclides with complicated gamma emission, due to the different approaches of MicroShield and FGR12. A case study is presented for validation of the methodology, in which both dose rate measurements and laboratory analyses were performed on an extended quasi-homogeneous NORM contamination. The dose rate estimations obtained by applying the dose coefficients to the radionuclide concentrations
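
    The core of such a surrogate methodology is a linear combination: measured activity concentrations multiplied by per-nuclide dose coefficients, or the same relation inverted to turn a concentration limit into a dose rate guideline. A minimal sketch with invented coefficients standing in for the MicroShield/FGR12 values:

        # (uSv/h) per (Bq/g) for a given exposure geometry -- assumed values
        dose_coeff = {"Cs-137": 1.0e-3, "Co-60": 4.5e-3}
        concentration = {"Cs-137": 50.0, "Co-60": 5.0}   # Bq/g, assumed survey result

        dose_rate = sum(concentration[n] * dose_coeff[n] for n in concentration)
        print(f"projected external dose rate: {dose_rate:.3f} uSv/h")

        # Inverted: dose-rate guideline equivalent to a single-nuclide limit
        limit_Bq_per_g = 100.0                           # assumed contamination limit
        print(f"guideline: {limit_Bq_per_g * dose_coeff['Cs-137']:.2f} uSv/h")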

  7. Environmental dose rate assessment of ITER using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Karimian Alireza

    2014-01-01

    Full Text Available Exposure to radiation is one of the main sources of risk to staff employed in reactor facilities. The staff of a tokamak is exposed to a wide range of neutrons and photons around the tokamak hall. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion engineering project and the most advanced experimental tokamak in the world. From the radiobiological point of view, ITER dose rate assessment is particularly important. The aim of this study is the assessment of the amount of radiation in ITER during its normal operation in a radial direction from the plasma chamber to the tokamak hall. To achieve this goal, the ITER system and its components were simulated by the Monte Carlo method using the MCNPX 2.6.0 code. Furthermore, the equivalent dose rates of some radiosensitive organs of the human body were calculated by using the medical internal radiation dose phantom. Our study is based on deuterium-tritium plasma burning, with 14.1 MeV neutron production, and also photon radiation due to neutron activation. As our results show, the total equivalent dose rate on the outside of the bioshield wall of the tokamak hall is about 1 mSv per year, which is less than the annual occupational dose rate limit during the normal operation of ITER. Also, the equivalent dose rates of radiosensitive organs have shown that the maximum dose rate belongs to the kidney. The data may help calculate how long the staff can stay in such an environment before the equivalent dose rates reach the whole-body dose limits.

  8. Methods for estimation of internal dose of the public from dietary

    International Nuclear Information System (INIS)

    Zhu Hongda

    1987-01-01

    Following the issue of its Publication 26, ICRP has successively published its Publication 30 to reflect the great changes and improvements made in the Basic Recommendations since July of 1979. In Part 1 of Publication 30, ICRP recommended a new method for internal dose estimation and presented some important data. In this report, a comparison is made among methods for estimating the internal dose to the public from dietary intake. They include: (1) the new method suggested by ICRP; (2) the simple and convenient method using transfer factors under equilibrium conditions; (3) the methods based on the similarities of several radionuclides to their chemical analogs. It is concluded that the first method is better than the others and should be used from now on
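
    In its simplest form, the first method reduces to multiplying the annual dietary intake of each radionuclide by a dose-per-unit-intake coefficient and summing, as sketched below with placeholder coefficients (not the ICRP values).

        # Sv per Bq ingested -- placeholder coefficients, not ICRP data
        dose_per_intake = {"Cs-137": 1.3e-8, "Sr-90": 2.8e-8}
        annual_intake   = {"Cs-137": 2000.0, "Sr-90": 150.0}   # Bq/year, assumed

        committed_mSv = 1e3 * sum(
            annual_intake[n] * dose_per_intake[n] for n in annual_intake
        )
        print(f"committed effective dose: {committed_mSv:.3f} mSv per year of intake")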

  9. A direct method for estimating the alpha/beta ratio from quantitative dose-response data

    International Nuclear Information System (INIS)

    Stuschke, M.

    1989-01-01

    A one-step optimization method based on a least squares fit of the linear quadratic model to quantitative tissue response data after fractionated irradiation is proposed. Suitable end-points that can be analysed by this method are growth delay, host survival and quantitative biochemical or clinical laboratory data. The functional dependence between the transformed dose and the measured response is approximated by a polynomial. The method allows for the estimation of the alpha/beta ratio and its confidence limits from all observed responses of the different fractionation schedules. Censored data can be included in the analysis. A method to test the appropriateness of the fit is presented. A computer simulation illustrates the method and its accuracy as exemplified by the growth delay end point. A comparison with a fit of the linear quadratic model to interpolated isoeffect doses shows the advantages of the direct method. (orig./HP) [de
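
    Because the linear quadratic effect E = n·d·(alpha + beta·d) is linear in alpha and beta, fitting all schedules at once is a two-column linear regression. The sketch below illustrates this one-step idea on invented growth-delay data (constructed to be roughly consistent with an alpha/beta near 10 Gy, and assuming the response is directly proportional to E); the paper's method additionally handles censored data and a polynomial dose-response link, which are omitted here.

        import numpy as np

        n = np.array([1, 2, 4, 8, 16])             # number of fractions
        d = np.array([12.0, 7.0, 4.0, 2.5, 1.5])   # dose per fraction (Gy)
        y = np.array([7.9, 7.2, 6.7, 7.5, 8.3])    # growth delay (days), invented

        # E = alpha*(n*d) + beta*(n*d^2)  ->  ordinary least squares in (alpha, beta)
        A = np.column_stack([n * d, n * d**2])
        (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
        print(f"alpha/beta = {alpha / beta:.1f} Gy")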

  10. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  11. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  12. Benchmark assessment of density functional methods on group II-VI MX (M = Zn, Cd; X = S, Se, Te) quantum dots

    NARCIS (Netherlands)

    Azpiroz, Jon M.; Ugalde, Jesus M.; Infante, Ivan

    2014-01-01

    In this work, we build a benchmark data set of geometrical parameters, vibrational normal modes, and low-lying excitation energies for MX quantum dots, with M = Cd, Zn, and X = S, Se, Te. The reference database has been constructed by ab initio resolution-of-identity second-order approximate coupled

  13. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, computer code development of radiation transport modeling, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  14. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Tao, Yinghua [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong [Department of Medical Physics and Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Hacker, Timothy A.; Raval, Amish N. [Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Van Lysel, Michael S.; Speidel, Michael A., E-mail: speidel@wisc.edu [Department of Medical Physics and Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)

    2014-07-15

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937

  15. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    International Nuclear Information System (INIS)

    Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.

    2014-01-01

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937

  16. A method to evaluate the dose increase in CT with iodinated contrast medium

    International Nuclear Information System (INIS)

    Amato, Ernesto; Lizio, Domenico; Settineri, Nicola; Di Pasquale, Andrea; Salamone, Ignazio; Pandolfo, Ignazio

    2010-01-01

    Purpose: The objective of this study is to develop a method to calculate the relative dose increase when a computerized tomography scan (CT) is carried out after administration of iodinated contrast medium, with respect to the same CT scan in absence of contrast medium. Methods: A Monte Carlo simulation in GEANT4 of anthropomorphic neck and abdomen phantoms exposed to a simplified model of CT scanner was set up in order to calculate the increase of dose to thyroid, liver, spleen, kidneys, and pancreas as a function of the quantity of iodine accumulated; a series of experimental measurements of Hounsfield unit (HU) increment for known concentrations of iodinated contrast medium was carried out on a Siemens Sensation 16 CT scanner in order to obtain a relationship between the increment in HU and the relative dose increase in the organs studied. The authors applied such a method to calculate the average dose increase in three patients who underwent standard CT protocols consisting of one native scan in absence of contrast, followed by a contrast-enhanced scan in venous phase. Results: The authors validated their GEANT4 Monte Carlo simulation by comparing the resulting dose increases for iodine solutions in water with the ones presented in literature and with their experimental data obtained through a Roentgen therapy unit. The relative dose increases as a function of the iodine mass fraction accumulated and as a function of the Hounsfield unit increment between the contrast-enhanced scan and the native scan are presented. The data shown for the three patients exhibit an average relative dose increase between 22% for liver and 74% for kidneys; also, spleen (34%), pancreas (28%), and thyroid (48%) show a remarkable average increase. Conclusions: The method developed allows a simple evaluation of the dose increase when iodinated contrast medium is used in CT scans, based on the increment in Hounsfield units observed on the patients' organs. Since many clinical protocols
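
    The evaluation chain of the method is: measure the HU increment between the enhanced and native scans, convert it to an iodine mass fraction, then look up the corresponding relative dose increase from the Monte Carlo curves. A sketch with invented calibration numbers standing in for the published curves:

        import numpy as np

        HU_PER_MG_IODINE_PER_G = 26.0    # assumed HU increment per mg iodine / g tissue

        iodine_frac   = np.array([0.0, 2.0, 5.0, 10.0])    # mg/g
        dose_increase = np.array([0.0, 15.0, 35.0, 70.0])  # %, assumed Monte Carlo curve

        def relative_dose_increase(delta_hu):
            mg_per_g = delta_hu / HU_PER_MG_IODINE_PER_G
            return np.interp(mg_per_g, iodine_frac, dose_increase)

        print(f"{relative_dose_increase(120.0):.0f}% dose increase for a 120 HU increment")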

  17. A method to evaluate the dose increase in CT with iodinated contrast medium

    Energy Technology Data Exchange (ETDEWEB)

    Amato, Ernesto; Lizio, Domenico; Settineri, Nicola; Di Pasquale, Andrea; Salamone, Ignazio; Pandolfo, Ignazio [Department of Radiological Sciences, University of Messina, Messina 98125 (Italy); Department of Physics, University of Messina, Messina 98166 (Italy); University Hospital "G. Martino", Messina 98125 (Italy); Department of Radiological Sciences, University of Messina, Messina 98125 (Italy) and University Hospital "G. Martino", Messina 98125 (Italy)

    2010-08-15

    Purpose: The objective of this study is to develop a method to calculate the relative dose increase when a computerized tomography scan (CT) is carried out after administration of iodinated contrast medium, with respect to the same CT scan in absence of contrast medium. Methods: A Monte Carlo simulation in GEANT4 of anthropomorphic neck and abdomen phantoms exposed to a simplified model of CT scanner was set up in order to calculate the increase of dose to thyroid, liver, spleen, kidneys, and pancreas as a function of the quantity of iodine accumulated; a series of experimental measurements of Hounsfield unit (HU) increment for known concentrations of iodinated contrast medium was carried out on a Siemens Sensation 16 CT scanner in order to obtain a relationship between the increment in HU and the relative dose increase in the organs studied. The authors applied such a method to calculate the average dose increase in three patients who underwent standard CT protocols consisting of one native scan in absence of contrast, followed by a contrast-enhanced scan in venous phase. Results: The authors validated their GEANT4 Monte Carlo simulation by comparing the resulting dose increases for iodine solutions in water with the ones presented in literature and with their experimental data obtained through a Roentgen therapy unit. The relative dose increases as a function of the iodine mass fraction accumulated and as a function of the Hounsfield unit increment between the contrast-enhanced scan and the native scan are presented. The data shown for the three patients exhibit an average relative dose increase between 22% for liver and 74% for kidneys; also, spleen (34%), pancreas (28%), and thyroid (48%) show a remarkable average increase. Conclusions: The method developed allows a simple evaluation of the dose increase when iodinated contrast medium is used in CT scans, based on the increment in Hounsfield units observed on the patients' organs. Since many clinical

  18. Virtual reality based adaptive dose assessment method for arbitrary geometries in nuclear facility decommissioning.

    Science.gov (United States)

    Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun

    2018-05-17

    This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
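
    The Point-Kernel step named above has a compact closed form: the dose rate falls off as B(mu*r)*exp(-mu*r)/(4*pi*r^2) around each kernel, with the buildup factor B taken from the geometric-progression (GP) fitting function. The sketch below implements one kernel; the attenuation coefficient, flux-to-dose factor, and GP parameters are placeholders, not values from the paper.

        import math

        def gp_buildup(mfp, b, c, a, xk, d):
            """Geometric-progression buildup factor (ANSI/ANS-6.4.3 form)."""
            k = c * mfp**a + d * (math.tanh(mfp / xk - 2) - math.tanh(-2)) / (1 - math.tanh(-2))
            if abs(k - 1.0) < 1e-9:
                return 1 + (b - 1) * mfp
            return 1 + (b - 1) * (k**mfp - 1) / (k - 1)

        def point_kernel_dose_rate(S, r_cm, mu, flux_to_dose, gp_params):
            mfp = mu * r_cm                          # shield thickness in mean free paths
            flux = S * gp_buildup(mfp, *gp_params) * math.exp(-mfp) / (4 * math.pi * r_cm**2)
            return flux * flux_to_dose

        # Invented example: one photon point kernel evaluated at 50 cm
        print(point_kernel_dose_rate(S=1e9, r_cm=50.0, mu=0.06,
                                     flux_to_dose=1.8e-6,
                                     gp_params=(1.8, 1.2, 0.05, 14.0, -0.1)))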

  19. Practical dose point-based methods to characterize dose distribution in a stationary elliptical body phantom for a cone-beam C-arm CT system

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Mechanical Engineering, Stanford University, Stanford, California 94305 (United States); Constantin, Dragos [Microwave Physics R&E, Varian Medical Systems, Palo Alto, California 94304 (United States); Ganguly, Arundhuti; Girard, Erin; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Morin, Richard L. [Mayo Clinic Jacksonville, Jacksonville, Florida 32224 (United States); Dixon, Robert L. [Department of Radiology, Wake Forest University, Winston-Salem, North Carolina 27157 (United States)

    2015-08-15

    Purpose: To propose new dose point measurement-based metrics to characterize the dose distributions and the mean dose from a single partial rotation of an automatic exposure control-enabled, C-arm-based, wide cone angle computed tomography system over a stationary, large, body-shaped phantom. Methods: A small 0.6 cm3 ion chamber (IC) was used to measure the radiation dose in an elliptical body-shaped phantom made of tissue-equivalent material. The IC was placed at 23 well-distributed holes in the central and peripheral regions of the phantom and dose was recorded for six acquisition protocols with different combinations of minimum kVp (109 and 125 kVp) and z-collimator aperture (full: 22.2 cm; medium: 14.0 cm; small: 8.4 cm). Monte Carlo (MC) simulations were carried out to generate complete 2D dose distributions in the central plane (z = 0). The MC model was validated at the 23 dose points against IC experimental data. The planar dose distributions were then estimated using subsets of the point dose measurements using two proposed methods: (1) the proximity-based weighting method (method 1) and (2) the dose point surface fitting method (method 2). Twenty-eight different dose point distributions with six different point number cases (4, 5, 6, 7, 14, and 23 dose points) were evaluated to determine the optimal number of dose points and their placement in the phantom. The performances of the methods were determined by comparing their results with those of the validated MC simulations. The performances of the methods in the presence of measurement uncertainties were evaluated. Results: The 5-, 6-, and 7-point cases had differences below 2%, ranging from 1.0% to 1.7% for both methods, which is a performance comparable to that of the methods with a relatively large number of points, i.e., the 14- and 23-point cases. However, with the 4-point case, the performances of the two methods decreased sharply. Among the 4-, 5-, 6-, and 7-point cases, the 7-point case (1
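
    The paper's own proximity-based weighting is not reproduced here, but the flavor of estimating a planar dose from a handful of point measurements can be illustrated with plain inverse-distance weighting, used below as a stand-in (all positions and doses invented).

        import numpy as np

        points = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 6.0], [-10.0, 0.0]])  # cm
        doses  = np.array([4.2, 2.1, 2.8, 2.0])                                 # mGy

        def idw_dose(xy, power=2.0):
            """Inverse-distance-weighted dose estimate at position xy."""
            dist = np.linalg.norm(points - xy, axis=1)
            if dist.min() < 1e-9:                 # exactly on a measurement point
                return float(doses[dist.argmin()])
            w = 1.0 / dist**power
            return float((w * doses).sum() / w.sum())

        print(f"estimated dose at (3, 2): {idw_dose(np.array([3.0, 2.0])):.2f} mGy")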

  20. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  1. Blind method of clustering for the evaluation of the dose received by personnel in two methods of administration of radiopharmaceuticals

    International Nuclear Information System (INIS)

    Verde Velasco, J. M.; Gonzalez Gonzalez, M.; Montes Fuentes, C.; Verde Velasco, J.; Gonzalez Blanco, F. J.; Ramos Pacho, J. A.

    2013-01-01

    The difficulty of injecting drugs labelled with radioactive isotopes while the syringe is held inside its lead shield means that in many cases staff choose to use the syringe outside the lead shield, thereby increasing the radiation dose they receive. Our department proposes a different methodology: administering the drug through a previously placed catheter line, which allows it to be given, in all cases, with the syringe inside the lead shield. We will check whether significant differences can be seen, both in the dose absorbed by the staff and in the time taken to administer the drug, using the proposed method compared with injection without the shield. (Author)

  2. Dose determination in UCCA treatments with LDR brachytherapy using Monte Carlo methods

    International Nuclear Information System (INIS)

    Benites R, J. L.; Vega C, H. R.

    2017-10-01

    Using Monte Carlo methods, with the code MCNP5, a gynecological mannequin and a vaginal cylinder were modeled. The spatial distribution of the absorbed dose rate in uterine cervical cancer (UCCA) treatments was determined under the modality of manual low-dose-rate brachytherapy (B-LDR). The model included the gynecological liquid-water mannequin and a vaginal cylinder applicator of Lucite (PMMA) with a hemispherical end. The applicator was formed by a vaginal cylinder 10.3 cm long and 2 cm in diameter. This cylinder was mounted on a stainless steel tube 15.2 cm long by 0.6 cm in diameter. A linear array of four radioactive sources of Cesium-137 was inserted into the tube. Thirteen water cells 0.5 cm in diameter were modeled around the vaginal cylinder and the absorbed dose was calculated in these. The distribution of the gamma photon fluence in the mesh was calculated. It was found that the distribution of the absorbed dose is symmetric for cells located in the upper and lower parts of the vaginal cylinder. The values of the absorbed dose rate were estimated for the manufacture date of the sources. This result allows the use of the law of radioactive decay to determine the dose rate on any date of a gynecological B-LDR treatment, as sketched below. (Author)
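
    The decay correction mentioned at the end is the ordinary exponential law; a minimal sketch with an assumed reference dose rate:

        import math

        T_HALF_CS137_Y = 30.17   # Cs-137 half-life (years)

        def dose_rate_at(dose_rate_ref, elapsed_years, t_half=T_HALF_CS137_Y):
            """Scale a reference dose rate to a later date via radioactive decay."""
            return dose_rate_ref * math.exp(-math.log(2) * elapsed_years / t_half)

        # Example with an assumed reference value of 0.52 cGy/h at manufacture
        print(f"{dose_rate_at(0.52, 12.0):.3f} cGy/h after 12 years")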

  3. Application of the Monte Carlo method to estimate doses in a radioactive waste drum environment

    International Nuclear Information System (INIS)

    Rodenas, J.; Garcia, T.; Burgos, M.C.; Felipe, A.; Sanchez-Mayoral, M.L.

    2002-01-01

    During refuelling operations in a Nuclear Power Plant, filtration is used to remove non-soluble radionuclides contained in the water from the reactor pool. Filter cartridges accumulate a high radioactivity, so they are usually placed into a drum. When the operation ends, the drum is filled with concrete and stored along with other drums containing radioactive wastes. Operators working in the refuelling plant near these radwaste drums can receive high dose rates. Therefore, it is convenient to estimate those doses in order to apply the ALARA criterion for dose reduction to workers. The Monte Carlo method has been applied, using the MCNP 4B code, to simulate the drum containing contaminated filters and estimate the doses produced in the drum environment. In the paper, an analysis of the results obtained with the MCNP code has been performed. Thus, the influence on the evaluated doses of the distance from the drum and of interposed shielding barriers has been studied. The source term has also been analysed to check the importance of the isotope composition. Two different geometric models have been considered in order to simplify calculations. Results have been compared with dose measurements in the plant in order to validate the calculation procedure. This work has been developed at the Nuclear Engineering Department of the Polytechnic University of Valencia in collaboration with IBERINCO, within the framework of an R&D project sponsored by IBERINCO

  4. A new tissue segmentation method to calculate 3D dose in small animal radiation therapy.

    Science.gov (United States)

    Noblet, C; Delpon, G; Supiot, S; Potiron, V; Paris, F; Chiavassa, S

    2018-02-26

    In pre-clinical animal experiments, radiation is usually delivered with kV photon beams, in contrast to the MV beams used in clinical irradiation, because of the small size of the animals. At this medium energy range, however, the contribution of the photoelectric effect to absorbed dose is significant. Accurate dose calculation therefore requires a more detailed tissue definition because both density (ρ) and elemental composition (Zeff) affect the dose distribution. Moreover, when applied to cone beam CT (CBCT) acquisitions, the stoichiometric calibration of HU becomes inefficient as it is designed for highly collimated fan beam CT acquisitions. In this study, we propose an automatic tissue segmentation method for CBCT imaging that assigns both density (ρ) and elemental composition (Zeff) in small animal dose calculation. The method is based on the relationship found between the CBCT number and the ρ·Zeff product computed for known materials. Monte Carlo calculations were performed to evaluate the impact of ρZeff variation on the absorbed dose in tissues. These results led to the creation of a tissue database composed of artificial tissues interpolated from tissue values published by the ICRU. The ρZeff method was validated by measuring transmitted doses through tissue substitute cylinders and a mouse with EBT3 film. Measurements were compared to the results of the Monte Carlo calculations. The study of the impact of ρZeff variation over the range of materials, from ρZeff = 2 g·cm-3 (lung) to 27 g·cm-3 (cortical bone), led to the creation of 125 artificial tissues. For tissue substitute cylinders, the use of the ρZeff method led to maximal and average relative differences between the Monte Carlo results and the EBT3 measurements of 3.6% and 1.6%. Equivalent comparison for the mouse gave maximal and average relative differences of 4.4% and 1.2% inside the 80% isodose area. Gamma analysis led to a 94.9% success rate in the 10% isodose
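
    The assignment step can be pictured as a one-dimensional lookup: map each CBCT voxel value to a ρ·Zeff product through a calibration relation, then pick the closest tissue in the database. The sketch below uses an invented linear calibration and a four-entry database standing in for the 125 ICRU-interpolated tissues.

        # Invented calibration and database values, for illustration only
        TISSUE_DB = {                       # rho*Zeff (g/cm3)
            "lung": 2.0, "adipose": 5.5, "muscle": 7.8, "cortical_bone": 27.0,
        }

        def cbct_to_rho_zeff(cbct_number, slope=0.012, intercept=7.5):
            """Assumed linear CBCT-number-to-rho*Zeff calibration."""
            return slope * cbct_number + intercept

        def assign_tissue(cbct_number):
            rz = cbct_to_rho_zeff(cbct_number)
            return min(TISSUE_DB, key=lambda t: abs(TISSUE_DB[t] - rz))

        print(assign_tissue(-700), assign_tissue(40), assign_tissue(1400))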

  5. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    Directory of Open Access Journals (Sweden)

    Tatsuhiro Gotanda

    2016-01-01

    Full Text Available Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that creating a density-absorbed dose calibration curve is a time-consuming process. The purpose of this study was the development of a simplified method for creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film, which has a low energy dependence, and a step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time than the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
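
    The calibration itself is a straight-line fit of film response against the known doses delivered through the step filter. A minimal sketch with invented response values; the gradients of about −32 to −34 reported above correspond to the authors' own response definition and units.

        import numpy as np

        dose_mGy = np.array([0.0, 25.0, 50.0, 100.0, 200.0])      # delivered doses
        response = np.array([0.950, 0.942, 0.935, 0.921, 0.893])  # film response, invented

        slope, intercept = np.polyfit(response, dose_mGy, 1)      # dose as f(response)

        def response_to_dose(r):
            return slope * r + intercept

        print(f"{response_to_dose(0.930):.0f} mGy")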

  6. Reevaluation of nasal swab method for dose estimation at nuclear emergency accident

    International Nuclear Information System (INIS)

    Yamada, Yuji; Fukutsu, Kumiko; Kurihara, Osamu; Akashi, Makoto

    2008-01-01

    The ICRP Publication 66 human respiratory tract model has been used extensively in exposure dose assessment. It is well known that the respiratory deposition efficiency of inhaled aerosol and its deposition region strongly depend on the particle size. In most exposure accidents, however, the size of the inhaled aerosol is unknown. Two default aerosol sizes, 5 μm AMAD for workers and 1 μm AMAD for the public, are therefore given as representative in the ICRP model, but neither size is linked directly to the maximum dose. In this study, the most hazardous size for health effects and how to estimate an intake activity were discussed from the viewpoint of emergency medicine. In an exposure accident involving an alpha emitter such as Pu-239, lung monitoring and bioassay measurements are not the best methods for rapid estimation with high sensitivity, so the applicability of the nasal swab method was investigated. The computer software LUDEP was used to calculate respiratory deposition. It showed that the effective dose per unit intake activity strongly depends on the inhaled aerosol size. For Pu-239 dioxide aerosols, it was confirmed that the maximum of the dose conversion factor occurs around 0.01 μm, meaning that this is the most hazardous size in an exposure accident involving Pu-239. From analysis of the relationship between AI (alveolar-interstitial) and ET1 (anterior nasal) deposition, it was found that the dose conversion factor from the activity deposited in the ET1 region is also affected by the aerosol size. Using the ICRP default size in the nasal swab method might therefore cause an obvious underestimation of the intake activity. Dose estimation based on the nasal swab method remains feasible as a conservative (safety-side) approach in a nuclear emergency, and its quantitative reliability should be reevaluated for emergency medicine, taking chelating agent administration into account. (author)
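
    Operationally, a nasal swab estimate divides the recovered swab activity by an assumed ET1 deposition fraction (itself size-dependent, which is precisely the article's caveat) and a swab recovery efficiency, then applies an inhalation dose coefficient. Every number in the sketch below is a placeholder.

        # All values assumed for illustration; the deposition fraction in
        # particular depends strongly on the unknown aerosol size (AMAD).
        swab_activity_Bq     = 1.2     # summed over both nostrils
        et1_deposit_fraction = 0.17    # fraction of intake deposited in ET1
        swab_recovery        = 0.5     # fraction of the ET1 deposit picked up by the swab
        dose_coeff_Sv_per_Bq = 1.6e-5  # inhalation dose coefficient

        intake_Bq = swab_activity_Bq / (et1_deposit_fraction * swab_recovery)
        dose_mSv = intake_Bq * dose_coeff_Sv_per_Bq * 1e3
        print(f"intake ~ {intake_Bq:.1f} Bq, committed dose ~ {dose_mSv:.2f} mSv")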

  7. Objective method to report planner-independent skin/rib maximal dose in balloon-based high dose rate (HDR) brachytherapy for breast cancer

    International Nuclear Information System (INIS)

    Kim, Yongbok; Trombetta, Mark G.

    2011-01-01

    Purpose: An objective method was proposed and compared with a manual selection method for determining the planner-independent skin and rib maximal dose in balloon-based high dose rate (HDR) brachytherapy planning. Methods: The maximal dose to the skin and rib was objectively extracted from dose-volume histograms (DVHs) of the skin and rib volumes. A virtual skin volume was produced by expanding the skin surface in three dimensions (3D) external to the breast, with a certain thickness, in the planning computed tomography (CT) images; the maximal dose to this volume therefore occurs on the skin surface, as in the conventional manual selection method. The rib was also delineated in the planning CT images and its maximal dose was extracted from its DVH. The absolute (Abdiff = |Dmax(Man) − Dmax(DVH)|) and relative (Rediff[%] = 100 × |Dmax(Man) − Dmax(DVH)| / Dmax(DVH)) maximal skin and rib dose differences between the manual selection method (Dmax(Man)) and the objective method (Dmax(DVH)) were measured for 50 balloon-based HDR patients (25 MammoSite and 25 Contura). Results: The average ± standard deviation of the maximal dose difference was 1.67% ± 1.69% of the prescribed dose (PD). No statistical difference was observed between MammoSite and Contura patients for either the Abdiff or the Rediff[%] values. However, a statistically significant difference in Abdiff was observed between the higher dose range (Dmax > 90% of PD) and the lower dose range (Dmax < 90% of PD): 2.16% ± 1.93% vs 1.19% ± 1.25%, with a p value of 0.0049. The Rediff[%] analysis eliminated the inverse-square factor, and there was no statistically significant difference (p value = 0.8931) between the high and low dose ranges. Conclusions: The objective method, using volumetric information on the skin and rib, can determine the planner-independent maximal dose, in contrast with the manual selection method; the difference was <2% of PD on average when appropriate attention was paid to selecting the manual dose point in the 3D planning CT images.
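    In code, the objective method reduces to taking the maximum over the doses of all voxels in the expanded skin (or rib) volume and comparing it with the manually picked point dose. A minimal sketch, with hypothetical voxel doses expressed in % of the prescribed dose:

        import numpy as np

        def max_dose_differences(skin_doses, d_max_manual):
            """Compare the DVH-based maximum with a manually selected point dose.

            skin_doses:   voxel doses (% of prescribed dose) in the expanded skin volume
            d_max_manual: dose at the manually chosen point
            """
            d_max_dvh = skin_doses.max()
            ab_diff = abs(d_max_manual - d_max_dvh)          # Abdiff
            re_diff = 100.0 * ab_diff / d_max_dvh            # Rediff[%]
            return d_max_dvh, ab_diff, re_diff

        skin = np.array([72.1, 85.3, 91.8, 88.0, 90.4])      # hypothetical voxel doses
        print(max_dose_differences(skin, 89.5))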

  8. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
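    For readers who want the kernel in executable form, the sketch below implements a dense-matrix preconditioned conjugate gradient with a symmetric Gauss-Seidel preconditioner, M = (D+L)D⁻¹(D+U), applied to a small 1-D Laplacian. It follows the mathematical structure named above but none of HPCG's distributed data layout, reference implementation, or timing rules.

        import numpy as np
        from scipy.linalg import solve_triangular

        def sgs_preconditioner(A):
            """Apply M^-1 for the symmetric Gauss-Seidel splitting M = (D+L) D^-1 (D+U)."""
            D = np.diag(np.diag(A))
            DL = np.tril(A)   # D + L
            DU = np.triu(A)   # D + U
            def apply(r):
                w = solve_triangular(DL, r, lower=True)          # forward sweep
                return solve_triangular(DU, D @ w, lower=False)  # backward sweep
            return apply

        def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
            """Textbook preconditioned conjugate gradient iteration."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv(r)
                rz_next = r @ z
                p = z + (rz_next / rz) * p
                rz = rz_next
            return x

        # Small SPD test system (a 1-D Laplacian), standing in for HPCG's 3-D grid.
        n = 50
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = pcg(A, b, sgs_preconditioner(A))
        print("residual norm:", np.linalg.norm(A @ x - b))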

  9. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area…

  10. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    …compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry…

  11. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to establish improvement processes by comparing activities against quality standards. The proposed methodology is illustrated by benchmarking business cases performed in healthcare facilities, on items such as nosocomial infections or the organization of surgical facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-scorecard figures and mappings, so that the comparison between different anesthesia and intensive care departments willing to start an improvement program is easy and relevant. This ready-made application becomes even more accurate where detailed activity tariffs are available.

  12. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of the validation of the well-known Monte Carlo code MCNP™ and the adjoining neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U–D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Experiments, in the near future. (author)

  13. Universal field matching in craniospinal irradiation by a background-dose gradient-optimized method.

    Science.gov (United States)

    Traneus, Erik; Bizzocchi, Nicola; Fellin, Francesco; Rombi, Barbara; Farace, Paolo

    2018-01-01

    The gradient-optimized methods are overcoming the traditional feathering methods for planning field junctions in craniospinal irradiation. In this note, a new gradient-optimized technique based on the use of a background dose is described. Treatment planning was performed with RayStation (RaySearch Laboratories, Stockholm, Sweden) on the CT scans of a pediatric patient. Both proton (pencil beam scanning) and photon (volumetric modulated arc therapy) treatments were planned with three isocenters. An 'in silico' ideal background dose was created first to cover the upper-spinal target and to produce a perfect dose gradient along the upper and lower junction regions. Using it as background, the cranial and the lower-spinal beams were planned by inverse optimization to obtain dose coverage of their relevant targets and of the junction volumes. Finally, the upper-spinal beam was inversely planned after removal of the background dose and with the previously optimized beams switched on. In both proton and photon plans, the optimized cranial and lower-spinal beams produced a perfect linear gradient in the junction regions, complementary to that produced by the optimized upper-spinal beam. The final dose distributions showed homogeneous coverage of the targets. Our simple technique allowed us to obtain high-quality gradients in the junction region. The technique works for photons as well as protons and is applicable to any TPS that can manage a background dose. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
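    The essential property of such a junction, complementary linear gradients that sum to a uniform dose, can be demonstrated in a few lines. The sketch below is a 1-D toy (positions, junction width, and weights invented), not a re-implementation of the planning workflow.

        import numpy as np

        # Positions along the cranio-caudal axis (cm); junction spans z in [-1, 1].
        z = np.linspace(-5, 5, 201)
        ramp = np.clip((z + 1.0) / 2.0, 0.0, 1.0)   # linear gradient across the junction

        # Complementary beam contributions: one beam ramps down while the
        # adjacent beam ramps up, mirroring the background-dose gradients above.
        cranial_dose = 1.0 - ramp
        spinal_dose  = ramp
        total = cranial_dose + spinal_dose

        print(f"max deviation from uniform dose: {abs(total - 1.0).max():.2e}")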

  14. Benchmark results in radiative transfer

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Siewert, C.E.

    1986-02-01

    Several aspects of the FN method are reported, and the method is used to solve accurately some benchmark problems in radiative transfer in the field of atmospheric physics. The method was modified to handle cases of pure scattering, and an improved process was developed for computing the radiation intensity. An algorithm for computing several quantities used in the FN method was developed. An improved scheme for evaluating certain integrals relevant to the method is given, together with a two-term recursion relation that has proved useful for the numerical evaluation of matrix elements basic to the method. The methods used to solve the resulting linear algebraic equations are discussed, and the numerical results are evaluated. (M.C.K.) [pt

  15. Moving gantry method for electron beam dose profile measurement at extended source-to-surface distances.

    Science.gov (United States)

    Fekete, Gábor; Fodor, Emese; Pesznyák, Csilla

    2015-03-08

    A novel method has been put forward for measuring very large electron beam profiles. With this method, absorbed dose profiles can be measured at any depth in a solid phantom for total skin electron therapy. Electron beam dose profiles were collected with two different methods. Profile measurements were performed at 0.2 and 1.2 cm depths with a parallel plate and a thimble chamber, respectively. Electron beams of 108 cm × 108 cm and 45 cm × 45 cm projected size were scanned by vertically moving the phantom and detector at 300 cm source-to-surface distance with 90° and 270° gantry angles; the profiles collected this way were used as reference. Afterwards, the phantom was fixed on the central axis and the gantry was rotated in defined angular steps. After applying corrections for the different source-to-detector distances and the angle of incidence, the profiles measured in the two setups were compared, and a correction formalism was developed. The agreement between the cross profiles taken at the depth of maximum dose with the 'classical' scanning and with the new moving gantry method was better than 0.5% over the measuring range from zero to 71.9 cm. Inverse-square and attenuation corrections had to be applied. The profiles measured with the parallel plate chamber agree to better than 1%, except in the penumbra region, where the maximum difference is 1.5%. With the moving gantry method, very large electron field profiles can be measured at any depth in a solid phantom with high accuracy and reproducibility, and with much less time per step. No special instrumentation is needed. The method can be used for commissioning of very large electron beams for computer-assisted treatment planning, for designing beam modifiers to improve dose uniformity, and for verification of computed dose profiles.
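    The geometry behind the method can be sketched as follows: with the detector fixed and the gantry rotated by θ, each reading corresponds to the off-axis position x = SDD·tan θ of the reference vertical scan, where the detector sat farther from the source, so a schematic inverse-square rescaling recovers the profile. The published formalism also corrects for attenuation and oblique incidence; all numbers here are hypothetical.

        import numpy as np

        SDD = 300.0   # source-to-detector distance with the gantry at 0 degrees (cm)

        def moving_gantry_profile(readings, angles_deg):
            """Map constant-SDD readings taken at gantry angles onto off-axis positions.

            In the reference vertical scan, the off-axis detector sat at the larger
            distance SDD/cos(theta); the cos(theta)^2 factor rescales the fixed-detector
            readings accordingly (schematic inverse-square correction only).
            """
            theta = np.radians(angles_deg)
            x = SDD * np.tan(theta)
            return x, readings * np.cos(theta) ** 2

        angles   = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
        readings = np.array([0.93, 0.98, 1.00, 0.98, 0.93])   # hypothetical chamber data
        print(moving_gantry_profile(readings, angles))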

  16. Variability of dose predictions for cesium-137 and radium-226 using the PRISM method

    International Nuclear Information System (INIS)

    Bergstroem, U.; Andersson, K.; Roejder, B.

    1984-01-01

    The uncertainty associated with dose predictions for cesium-137 and radium-226 in a specific ecosystem has been studied using PRISM, a systematic method for determining the effect of parameter uncertainties on model predictions. The ecosystems studied are different types of lakes, with the following transport processes included: runoff of water in the lake, irrigation, and transport in soil, groundwater and sediment. The ecosystems are modelled by the compartment principle, using the BIOPATH code. Seven different internal exposure pathways are included. The total dose commitment for both nuclides varies by about two orders of magnitude. For cesium-137, the total dose and the uncertainty are dominated by the consumption of fish, and the most important contributor to the total uncertainty is the water-fish concentration factor. For radium-226, the largest contributions to the total dose come from the fish, milk and drinking-water exposure pathways. Half of the uncertainty lies in the milk dose, which is dominated by the distribution factor for milk. (orig.)
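    The kind of parameter-uncertainty propagation described above can be illustrated with plain Monte Carlo sampling (PRISM itself uses a more systematic sampling scheme). The distribution of the water-fish concentration factor and the other pathway constants below are invented for illustration; the ingestion dose coefficient is a typical Cs-137 value.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000

        # Hypothetical parameter distributions for the fish pathway (values invented):
        conc_factor = rng.lognormal(mean=np.log(1000.0), sigma=0.7, size=N)  # L/kg
        water_conc  = 0.5        # Bq/L of Cs-137 in lake water (assumed fixed)
        intake_kgy  = 20.0       # annual fish consumption, kg
        dose_per_bq = 1.3e-8     # Sv/Bq ingestion dose coefficient for Cs-137

        dose = conc_factor * water_conc * intake_kgy * dose_per_bq   # Sv/year

        lo, med, hi = np.percentile(dose, [2.5, 50, 97.5])
        print(f"median {med:.2e} Sv/y, 95% range {lo:.2e} - {hi:.2e} (~{hi/lo:.0f}x)")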

  17. Determination of the delivered hemodialysis dose using standard methods and on-line clearance monitoring

    Directory of Open Access Journals (Sweden)

    Vlatković Vlastimir

    2006-01-01

    Background/aim: The delivered dialysis dose has a cumulative effect and a significant influence on the adequacy of dialysis, quality of life, and the development of co-morbidity in patients on dialysis. Great attention is therefore given to optimizing the dialysis treatment. On-line clearance monitoring (OCM) allows precise and continuous measurement of the delivered dialysis dose. The Kt/V index (K = dialyzer urea clearance; t = dialysis time; V = patient's total body water), measured in real time, is used as the unit for expressing the dialysis dose. The aim of this research was a comparative assessment of the delivered dialysis dose using standard measurement methods and a module for continuous clearance monitoring. Methods. The study encompassed 105 patients who had been on the chronic hemodialysis program, three times a week, for more than three months. One treatment per patient was selected at random; all treatments were bicarbonate dialysis. The delivered dialysis dose was determined by calculation of mathematical models, the urea reduction ratio (URR) and the single-pool Kt/V index (spKt/V), and by the application of OCM. Results. URR was the most sensitive parameter for the assessment and, at the same time, was most strongly correlated with the other two indices, spKt/V and OCM. The values indicated an adequate dialysis dose. The URR values were significantly higher in women than in men (p < 0.05). The other applied model, the spKt/V index, also showed that the dialysis dose was adequate and that, by this parameter, women had significantly better dialysis than men (p < 0.05). According to OCM, the average value was slightly below the adequate one; women had satisfactory dialysis by this index as well, while the delivered dialysis dose was insufficient in men. The difference
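    For reference, the two standard indices can be computed directly from pre- and post-dialysis urea values. The sketch below uses the second-generation Daugirdas formula for spKt/V, a common formulation (the abstract does not state which variant was used); the session values are hypothetical.

        import math

        def urr(pre_urea, post_urea):
            """Urea reduction ratio in percent."""
            return 100.0 * (1.0 - post_urea / pre_urea)

        def sp_kt_v(pre_urea, post_urea, hours, uf_litres, weight_kg):
            """Single-pool Kt/V by the second-generation Daugirdas formula."""
            r = post_urea / pre_urea
            return -math.log(r - 0.008 * hours) + (4.0 - 3.5 * r) * uf_litres / weight_kg

        # Hypothetical session: urea 25 -> 8 mmol/L, 4 h, 2.5 L ultrafiltration, 70 kg.
        print(f"URR = {urr(25, 8):.1f}%, spKt/V = {sp_kt_v(25, 8, 4.0, 2.5, 70.0):.2f}")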

  18. A dose assessment method for arbitrary geometries with virtual reality in the nuclear facilities decommissioning

    Science.gov (United States)

    Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu

    2018-03-01

    During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which frequently change the structure and produce many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed, based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically in the process of geometric modeling via three procedures, namely space division, rough modeling of the body, and fine modeling of the surface, in combination with the collision detection of virtual reality technology. Point kernels are then generated by sampling within the approximate model, and once the material and radiometric attributes are input, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the geometric-progression fitting formula. The effectiveness and accuracy of the proposed method were verified by simulations using different geometries, and the dose rate results were compared with those derived from the CIDEC code, the MCNP code, and experimental measurements.
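    A minimal sketch of the Point-Kernel summation itself: source kernels sampled in the (approximate) geometry each contribute an attenuated inverse-square term times a buildup factor. A Berger-form buildup factor is used here as a simple stand-in for the geometric-progression fitting formula, and all physical constants are assumed values.

        import numpy as np

        MU = 0.06   # linear attenuation coefficient of the medium (1/cm, assumed)

        def buildup(mu_r, a=1.0, b=0.05):
            """Berger-form buildup factor, a stand-in for the GP fitting formula."""
            return 1.0 + a * mu_r * np.exp(b * mu_r)

        def dose_rate(kernels, activities, point, k=1.0):
            """Point-kernel sum: buildup times attenuated inverse-square per kernel.

            kernels:    (n, 3) kernel positions sampled inside the source geometry (cm)
            activities: source strength assigned to each kernel (Bq)
            k:          flux-to-dose conversion factor (arbitrary here)
            """
            r = np.linalg.norm(kernels - point, axis=1)
            flux = activities * buildup(MU * r) * np.exp(-MU * r) / (4.0 * np.pi * r**2)
            return k * flux.sum()

        kernels = np.random.default_rng(1).uniform(-5, 5, size=(1000, 3))   # cm
        activities = np.full(1000, 1.0e6 / 1000)                            # Bq, split evenly
        print(f"dose-rate proxy: {dose_rate(kernels, activities, np.array([100.0, 0.0, 0.0])):.3e}")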

  19. Application of combined TLD and CR-39 PNTD method for measurement of total dose and dose equivalent on ISS

    International Nuclear Information System (INIS)

    Benton, E.R.; Deme, S.; Apathy, I.

    2006-01-01

    To date, no single passive detector has been found that measures dose equivalent from ionizing radiation exposure in low-Earth orbit. We have developed the ISS Passive Dosimetry System (PDS), utilizing a combination of TLD, in the form of the self-contained Pille TLD system, and stacks of CR-39 plastic nuclear track detector (PNTD) oriented in three mutually orthogonal directions, to measure total dose and dose equivalent aboard the International Space Station (ISS). The Pille TLD system, consisting of an on-board reader and a large number of CaSO4:Dy TLD cells, is used to measure absorbed dose. The Pille TLD cells are read out and annealed by the ISS crew on orbit, such that dose information for any time period or condition, e.g. for EVA or following a solar particle event, is immediately available. Near-tissue-equivalent CR-39 PNTD provides the LET spectrum, dose, and dose equivalent from charged particles of LET∞,H2O ≥ 10 keV/μm, including the secondaries produced in interactions with high-energy neutrons. Dose information from CR-39 PNTD is used to correct the absorbed dose component ≥ 10 keV/μm measured in TLD to obtain total dose. Dose equivalent from CR-39 PNTD is combined with the dose component < 10 keV/μm measured in TLD to obtain total dose equivalent. Dose rates ranging from 165 to 250 μGy/day and dose equivalent rates ranging from 340 to 450 μSv/day were measured aboard the ISS during the Expedition 2 mission in 2001. Results from the PDS are consistent with those from other passive detectors tested as part of the ground-based ICCHIBAN intercomparison of space radiation dosimeters. (authors)
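    The bookkeeping described above, replacing the TLD's under-responding high-LET component with the CR-39 values, can be sketched as follows. The TLD high-LET relative response and all example numbers are assumptions for illustration, not the flight calibration.

        def combine_tld_cr39(d_tld, d_cr39_high, h_cr39_high, tld_high_let_response=0.8):
            """Combine TLD and CR-39 PNTD results into total dose and dose equivalent.

            d_tld:        TLD absorbed dose, all LET (uGy)
            d_cr39_high:  CR-39 absorbed dose for LET >= 10 keV/um (uGy)
            h_cr39_high:  CR-39 dose equivalent for LET >= 10 keV/um (uSv)
            tld_high_let_response: assumed relative TLD response to the high-LET
                component (illustrative; the flight analysis uses measured responses).
            """
            d_low = d_tld - tld_high_let_response * d_cr39_high   # low-LET component
            total_dose = d_low + d_cr39_high                      # corrected total dose
            total_dose_equivalent = d_low + h_cr39_high           # Q = 1 below 10 keV/um
            return total_dose, total_dose_equivalent

        # Hypothetical daily values: 180 uGy from TLD, 40 uGy / 230 uSv from CR-39.
        print(combine_tld_cr39(180.0, 40.0, 230.0))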

  20. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation

  1. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of
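    The quantity at the heart of such experiments, the reactivity worth of an inserted sample, follows directly from two eigenvalue calculations. A minimal sketch, with a typical (assumed) effective delayed-neutron fraction and hypothetical MCNP5 k-eff values:

        def reactivity_dollars(keff_ref, keff_pert, beta_eff=0.0073):
            """Reactivity worth of an inserted sample in dollars.

            rho = (k2 - k1) / (k1 * k2); one dollar equals beta_eff of reactivity.
            beta_eff is an assumed, typical value, not an ATR-C parameter.
            """
            rho = (keff_pert - keff_ref) / (keff_ref * keff_pert)
            return rho / beta_eff

        # Hypothetical results with and without four bars in the flux trap:
        print(f"worth = {reactivity_dollars(1.00000, 1.00650):+.2f} $")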

  2. On the absorbed dose determination method in high energy photon beams

    International Nuclear Information System (INIS)

    Scarlat, F.; Scarisoreanu, A.; Oane, M.; Mitru, E.; Avadanei, C.

    2008-01-01

    The absorbed dose determination method in water, based on standards of air kerma or exposure, for high energy photon beams generated by electrons with energies in the range of 1 MeV to 50 MeV is presented herein. The method is based on the IAEA TRS-398, AAPM TG-51, DIN 6800-2, IAEA TRS-381, IAEA TRS-277 and NACP-80 recommendations. The dosimetry equipment is composed of a UNIDOS T 10005 electrometer and different ionization chambers calibrated in terms of air kerma in a 60Co beam. Starting from the general formalism shown in IAEA TRS-381, the determination of absorbed dose to water under reference conditions in high energy photon beams is given. This method was adopted by the secondary standard dosimetry laboratory (SSDL) at NILPRP, Bucharest
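    The air-kerma-based formalism that these codes of practice share can be summarized in a few lines: the 60Co air-kerma calibration factor N_K is converted to a chamber factor N_D,air, and the corrected in-water reading is scaled by the water-to-air stopping-power ratio and a perturbation factor. The numerical defaults below are typical illustrative values, not data for a specific chamber.

        def dose_to_water_from_nk(m_corr, n_k, g=0.003, k_att_k_m=0.985,
                                  s_w_air=1.133, p_q=1.0):
            """Air-kerma-based dose-to-water chain (TRS-277/TRS-381 style).

            N_D,air = N_K * (1 - g) * k_att * k_m   (chamber factor from the
                                                     60Co air-kerma calibration)
            D_w     = M * N_D,air * s_w,air * p_Q   (dose to water at reference depth)

            m_corr is the reading corrected for influence quantities; all numerical
            defaults are illustrative assumptions.
            """
            n_d_air = n_k * (1.0 - g) * k_att_k_m
            return m_corr * n_d_air * s_w_air * p_q

        # Hypothetical: M = 11.8 nC, N_K = 48.5 mGy/nC
        print(f"D_w = {dose_to_water_from_nk(11.8, 48.5):.1f} mGy")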

  3. Application of Monte Carlo method for dose calculation in thyroid follicle

    International Nuclear Information System (INIS)

    Silva, Frank Sinatra Gomes da

    2008-02-01

    The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. Its principal advantage over deterministic methods is the ability to simulate complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport and can simulate energy deposition in models of organs and/or tissues, as well as in models of human body cells. The calculation of the absorbed dose to the thyroid follicles (composed of colloid and follicular cells) is thus of fundamental importance in dosimetry, because these cells are radiosensitive to ionizing radiation exposure, in particular exposure to radioisotopes of iodine, since a great amount of radioiodine may be released into the environment in the case of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles, for Auger electrons, internal conversion electrons and beta particles, from iodine-131 and the short-lived iodines (131, 132, 133, 134 and 135), with diameters varying from 30 to 500 μm. The results obtained from the MCNP4C simulations showed that, on average, 25% of the total dose absorbed by the colloid is due to iodine-131 and 75% to the short-lived iodines; for the follicular cells, the percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from particles with low energies, such as Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare doses obtained with the MCNP4C, EPOTRAN and EGS4 codes and with deterministic methods. (author)

  4. Determination method of inactivating minimal dose of gama radiation for Salmonella typhimurium

    International Nuclear Information System (INIS)

    Araujo, E.S.; Campos, H. de; Silva, D.M.

    1979-01-01

    A method for determining the minimal inactivating dose (MID) with Salmonella typhimurium is presented, as a more efficient way to improve irradiated vaccines. The MID found for S. typhimurium 6.616 by the binomial test was 0.55 MR. The method yields a definite value for the MID and requires less material, work and time than the usual procedure [pt

  5. On the absorbed dose determination method in high energy electrons beams

    International Nuclear Information System (INIS)

    Scarlat, F.; Scarisoreanu, A.; Oane, M.; Mitru, E.; Avadanei, C.

    2008-01-01

    The absorbed dose determination method in water for electron beams with energies in the range from 1 MeV to 50 MeV is presented herein. The dosimetry equipment for the measurements is composed of a PTW UNIDOS electrometer and different ionization chambers calibrated in terms of air kerma in a 60Co beam. Starting from the code of practice for high energy electron beams, this paper describes the method adopted by the secondary standard dosimetry laboratory (SSDL) at NILPRP, Bucharest

  6. Towards global benchmarking of food environments and policies to reduce obesity and diet-related non-communicable diseases: design and methods for nation-wide surveys.

    Science.gov (United States)

    Vandevijvere, Stefanie; Swinburn, Boyd

    2014-05-15

    Unhealthy diets are heavily driven by unhealthy food environments. The International Network for Food and Obesity/non-communicable diseases (NCDs) Research, Monitoring and Action Support (INFORMAS) has been established to reduce obesity, NCDs and their related inequalities globally. This paper describes the design and methods of the first-ever, comprehensive national survey on the healthiness of food environments and the public and private sector policies influencing them, as a first step towards global monitoring of food environments and policies. A package of 11 substudies has been identified: (1) food composition, labelling and promotion on food packages; (2) food prices, shelf space and placement of foods in different outlets (mainly supermarkets); (3) food provision in schools/early childhood education (ECE) services and outdoor food promotion around schools/ECE services; (4) density of and proximity to food outlets in communities; food promotion to children via (5) television, (6) magazines, (7) sport club sponsorships, and (8) internet and social media; (9) analysis of the impact of trade and investment agreements on food environments; (10) government policies and actions; and (11) private sector actions and practices. For the substudies on food prices, provision, promotion and retail, 'environmental equity' indicators have been developed to check progress towards reducing diet-related health inequalities. Indicators for these modules will be assessed by tertiles of area deprivation index or school deciles. International 'best practice benchmarks' will be identified, against which to compare progress of countries on improving the healthiness of their food environments and policies. This research is highly original due to the very 'upstream' approach being taken and its direct policy relevance. The detailed protocols will be offered to and adapted for countries of varying size and income in order to establish INFORMAS globally as a new monitoring initiative

  7. Retrospective methods of dose assessment of the Chernobyl 'liquidators'. A comparison

    International Nuclear Information System (INIS)

    Schmidt, M.; Ziggel, H.; Schmitz-Feuerhaake, I.; Dannheim, B.; Schikalov, V.; Usatyj, A.; Shevchenko, V.; Snigireva, G.; Serezhenkov, V.; Klevezal, G.

    1998-01-01

    A database of biomedical and dosimetric data on participants in the liquidation work at Chernobyl was set up, and dose profiles were created using suitable dose modelling. EPR spectrometric measurements of tooth enamel were performed as a routine method of retrospective dosimetry for radiation workers at medium to low exposures. Chromosome analyses were carried out on peripheral blood lymphocytes of a cohort of the liquidation workers, and fluorescence in-situ hybridization was also used. The number of workers volunteering to take part in the research, however, was too small to allow statistically relevant results to be obtained. (P.A.)

  8. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  9. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality data (as benchmark) as well as tedious preparatory work to generate sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). These resources include precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  10. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  11. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used

  12. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  13. An adaptive two-stage dose-response design method for establishing proof of concept.

    Science.gov (United States)

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design in which a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship, or proof of concept (PoC), via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power for our design method compared with conventional, fixed designs.
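    Stage-wise combination can be illustrated with the weighted inverse-normal rule, which is closely related to the conditional-error approach used above (shown here as a generic illustration, not the authors' exact procedure); the weights are prespecified and the p-values are hypothetical.

        from scipy.stats import norm

        def combined_poc_p_value(p1, p2, w1=0.5):
            """Weighted inverse-normal combination of stage-wise PoC p-values.

            w1 is the prespecified weight of stage 1 (w1 + w2 = 1); the stage-wise
            p-values come from the MCP-Mod tests of the respective stages.
            """
            w2 = 1.0 - w1
            z = (w1 ** 0.5) * norm.isf(p1) + (w2 ** 0.5) * norm.isf(p2)
            return norm.sf(z)

        print(f"combined p = {combined_poc_p_value(0.04, 0.10):.4f}")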

  14. Passive Rn dose meters - measuring methods appropriate for large measurement series

    International Nuclear Information System (INIS)

    Urban, M.; Kiefer, H.

    1985-01-01

    Passive integrating measuring methods can be classified into several groups by their functioning principle, e.g. spray chambers or open chambers with nuclear track detectors or TL detectors, open detectors, and activated carbon dose meters with or without TL detectors. Depending on the functioning principle, either radon alone or radon together with its decay products can be detected. The lecture gives a survey of the present state of development of passive Rn dose meters. Taking as an example the Rn dose meter developed at Karlsruhe, which was used in survey measurements carried out in Germany, Switzerland, the Netherlands, Belgium and Austria, the etching technology, estimation of measurement uncertainties, reproducibility and fading behaviour are discussed. (orig./HP) [de

  15. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures

    International Nuclear Information System (INIS)

    Puncher, M.; Birchall, A.; Bull, R. K.

    2012-01-01

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework for calculating these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented via the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses were calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and Q0.025 and Q0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method with the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)
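    Weighted-likelihood sampling of this kind is, in essence, importance sampling from the prior. The toy sketch below treats a single composite intake with two bioassay results; the biokinetic coefficients, error model, and dose coefficient are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        N = 50_000

        # Prior samples for a single composite intake (Bq); hypothetical biokinetic
        # coefficients map intake to two predicted bioassay results.
        intakes   = rng.lognormal(np.log(100.0), 1.0, N)
        predicted = np.outer(intakes, [2e-3, 1e-3])      # Bq per urine sample (assumed)
        measured  = np.array([0.25, 0.11])               # hypothetical measurements
        sigma_log = 0.3                                  # lognormal error model (assumed)

        # Likelihood weights, then posterior summaries by weighting the prior samples.
        log_w = -0.5 * (((np.log(measured) - np.log(predicted)) / sigma_log) ** 2).sum(axis=1)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()

        dose_per_bq = 1.0e-6                             # Sv/Bq (assumed)
        doses = intakes * dose_per_bq
        mean = (w * doses).sum()
        order = np.argsort(doses)
        cdf = np.cumsum(w[order])
        q025, q975 = doses[order][np.searchsorted(cdf, [0.025, 0.975])]
        print(f"posterior mean {mean:.2e} Sv, 95% interval [{q025:.2e}, {q975:.2e}]")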

  16. Effects of different premature chromosome condensation method on dose-curve of 60Co γ-ray

    International Nuclear Information System (INIS)

    Guo Yicao; Yang Haoxian; Yang Yuhua; Li Xi'na; Huang Weixu; Zheng Qiaoling

    2012-01-01

    Objective: To study the effect of the traditional and an improved premature chromosome condensation (PCC) method on the dose-effect curve of 60Co γ rays, in order to choose a rapid and accurate biological dose estimation method for accident emergencies. Methods: Cubital venous blood was collected from 3 healthy males (23 to 28 years old) and irradiated with 0, 1.0, 5.0, 10.0, 15.0 and 20.0 Gy of 60Co γ rays (absorbed dose rate: 0.635 Gy/min). The dose-effect relationship was observed for two incubation times (50 hours and 60 hours) with the traditional method and the improved method, and the dose-effect curves were used to verify an exposure of 10.0 Gy (absorbed dose rate: 0.670 Gy/min). Results: (1) With the traditional method and 50-hour culture, the PCC cell counts at 15.0 Gy and 20.0 Gy showed no statistically significant difference, whereas statistically significant differences were found with the traditional method at 60 hours and with the improved method (50-hour and 60-hour culture); dose curves were therefore constructed for the latter three culture methods. (2) For these three culture methods, the correlation coefficients between PCC rings and exposure dose were very close (all above 0.996, P < 0.05), and the regression lines almost overlapped. (3) When the three dose-effect curves were used to estimate the verification irradiation (10.0 Gy), the error was no more than 8%, within the allowable range for biological experiments (15%). Conclusion: The dose-effect curves of the three culture methods can be applied to biological dose estimation for high-dose ionizing radiation injury. The improved method with 50-hour culture gives the fastest estimate and should be the first choice in accident emergencies. (authors)
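    Numerically, such a dose-effect curve is a linear calibration of PCC-ring yield against dose, inverted for accident dosimetry. The sketch below uses invented yields; a linear model Y = c + αD is a common choice for PCC rings at high doses (the abstract does not state the fitted form).

        import numpy as np

        # Hypothetical calibration data: PCC rings per cell at known doses (Gy).
        doses  = np.array([0.0, 1.0, 5.0, 10.0, 15.0, 20.0])
        yields = np.array([0.001, 0.05, 0.28, 0.55, 0.83, 1.10])   # invented values

        # Linear fit Y = c + alpha*D.
        alpha, c = np.polyfit(doses, yields, 1)

        def estimate_dose(observed_yield):
            """Invert the calibration line for accident dosimetry."""
            return (observed_yield - c) / alpha

        print(f"alpha = {alpha:.3f} rings/cell/Gy; Y = 0.55 -> {estimate_dose(0.55):.1f} Gy")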

  17. Method of estimating patient skin dose from dose displayed on medical X-ray equipment with flat panel detector

    International Nuclear Information System (INIS)

    Fukuda, Atsushi; Koshida, Kichiro; Togashi, Atsuhiko; Matsubara, Kousuke

    2004-01-01

    The International Electrotechnical Commission (IEC) has stipulated that medical X-ray equipment for interventional procedures must display radiation doses such as the air kerma in free air at the interventional reference point and the dose-area product, to establish radiation safety for patients (IEC 60601-2-43). However, it is necessary to estimate the patient's entrance skin dose from the air kerma for an accurate risk assessment of radiation skin injury. To estimate entrance skin dose from the displayed air kerma in free air at the interventional reference point, the effective energy, the ratio of the mass-energy absorption coefficients of skin and air, and the backscatter factor must be considered. In addition, since automatic exposure control is installed in medical X-ray equipment with flat panel detectors, the characteristics of this control must be known to estimate the exposure dose. In order to calculate entrance skin dose under various conditions, we investigated clinical parameters such as tube voltage, tube current, pulse width, additional filtration, and focal spot size as functions of patient body size. We also measured the effective energy of the X-ray exposure as a function of the clinical parameter settings. We found that the conversion factor from air kerma in free air to entrance skin dose is about 1.4 for radiation protection purposes. (author)
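    The conversion described above can be written as a one-line product of factors. The defaults below are illustrative values chosen to reproduce a conversion factor near the reported ~1.4; they are not the paper's measured data.

        def entrance_skin_dose(k_air_iec, mu_en_ratio=1.06, bsf=1.35,
                               d_ref=60.0, d_skin=60.0):
            """Estimate entrance skin dose from the displayed air kerma.

            k_air_iec:    air kerma in free air at the interventional reference point (mGy)
            mu_en_ratio:  (mu_en/rho) skin-to-air ratio at the effective energy (assumed)
            bsf:          backscatter factor (assumed)
            d_ref/d_skin: distances for an inverse-square correction when the skin
                          plane differs from the reference point (cm)
            """
            return k_air_iec * mu_en_ratio * bsf * (d_ref / d_skin) ** 2

        print(f"ESD ~ {entrance_skin_dose(100.0):.0f} mGy per 100 mGy displayed")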